Informatics. Lecture notes: briefly, the most important

Table of contents

  1. Symbols
  2. General theoretical foundations of computer science (The concept of computer science. The concept of information. Information coding system. Coding of text information. Coding of graphic information. Coding of audio information. Modes and methods of information transmission. Information technology. Stages of development of information technology. The emergence of computers and computer technologies. The evolution of the development of personal computers. The structure of modern computing systems)
  3. Computer technologies for information processing (Classification and design of computers. Computer architecture. Memory in personal computers. The concept of command and system software of a computer. Basic input-output system (BIOS). The concept of CMOS RAM)
  4. Hardware and software architecture of IBM compatible technologies (Microprocessors. Motherboards. Buses, interfaces. Control tools for external devices. Information storage devices. Video controllers and monitors. Information input devices. Information output devices. Information transmission devices. Other peripheral devices)
  5. Fundamentals of user work in the operating environment of a personal computer (Operating systems. Classification of software. Purpose of operating systems. Evolution and characteristics of operating systems. Operating system of new technologies. WINDOWS NT architecture. Installation of WINDOWS NT. Registry and configuration of the WINDOWS NT operating system. Features of the WINDOWS 2000 operating system. Network operating systems. Family UNIX operating systems. Linux operating system. Novell family of network operating systems)
  6. Fundamentals of work in the environment of local and global computer networks (Evolution of computer networks. Basic software and hardware components of the network. Types of local networks. Organization of the domain structure of the network. Multi-level approach. Protocol. Interface. Protocol stack. Organization of accounts. Management of user groups. Management of security policy. Management of network resources. Network services. Tools that ensure interaction with other network operating systems. Organization of work in a hierarchical network. Organization of peer-to-peer networks and technology for working in them. Modem types of networks. Installing and configuring a modem. Organizing a connection with a remote personal computer. Working with switching programs. Working with fax modems)
  7. Internet networks (The emergence of the Internet. Possibilities of the Internet. Software for working on the Internet. Transfer of information on the Internet. Addressing system. Addressing and protocols on the Internet. Problems of working on the Internet with Cyrillic texts. Establishing a connection with a provider (entering the Internet). World Wide Web (WWW). Intranet. Creating a Web page using Front Page. File information resources FTP. Electronic mail (E-mail). News or conferences. E-commerce. Online store. Internet payment systems. Internet auctions. Internet banking. Internet insurance. Internet exchange. Internet marketing. Internet advertising)
  8. Fundamentals of working with general purpose applications (Definition of application programs. Text editors. Spreadsheet processors. The concept of shell programs. Graphic editors. The concept and structure of a data bank. Organizer programs. Presentation preparation programs. Working on the Internet with MS OFFICE applications. Stages of solving problems using a computer)
  9. Specialized professionally oriented software tools (Information systems of organizational and economic management. Modern information technologies in systems of organizational and economic management. Information systems of organizational and economic management. Office activities in systems of organizational and economic management. Organizational, technical and peripheral means of information systems. The concept of business graphics. Use of graphics in business. Business graphics program MS GRAPH. General characteristics of the technology for creating application software. Application software. Technology of system design of software. Modern methods and tools for developing application software)
  10. Fundamentals of algorithmization and programming (Concept of an algorithm. Programming systems. Classification of high-level programming languages. VBA system. VBA programming language)
  11. Fundamentals of information security (Information protection as a pattern of development of computer systems. Objects and elements of protection in computer data processing systems. Means of identification and access control to information. Cryptographic method of information protection. Computer viruses. Anti-virus programs. Protection of software products. Ensuring data security on an offline computer. Security data in an interactive environment)
  12. Database (The concept of a database. Database management systems. Hierarchical, network and relational data representation models. Post-relational, multidimensional and object-oriented data representation models. Classifications of database management systems. Database access languages. Internet databases)

Symbols

ALU - arithmetic logic unit.

ACS - automated control system.

ADC - analog-to-digital converter.

LSI - large-scale integrated circuit.

ESD - external storage device.

Memory - storage device.

IPS - information retrieval system.

HDD - hard disk drive.

RAM - random access memory.

OP - main memory (RAM).

OS - operating system.

ROM - read-only memory.

PC - personal computer.

PPO - application software.

PPP - application program package.

CAD - computer-aided design system.

DBMS - database management system.

UU - control device.

CPU - central processing unit.

DAC - digital-to-analog converter.

Computer (EVM) - electronic computing machine.

Topic 1. General theoretical foundations of computer science

1.1. The concept of informatics

Informatics (from the French informatique, formed from information and automatique, i.e. automation) has a wide range of applications. The main directions of this scientific discipline are:

▪ development of computer systems and software;

▪ information theory, which studies processes based on the transmission, reception, transformation and storage of information;

▪ methods that allow you to create programs for solving problems that require certain intellectual efforts when used by a person (logical inference, speech understanding, visual perception, etc.);

▪ system analysis, which consists of studying the purpose of the designed system and determining the requirements that it must meet;

▪ methods of animation, computer graphics, multimedia;

▪ telecommunications (global computer networks);

▪ various applications that are used in manufacturing, science, education, medicine, trade, agriculture, etc.

Most often, informatics is considered to consist of two types of means:

1) technical - computer equipment;

2) software - the whole variety of existing computer programs.

Sometimes there is another main branch - algorithmic tools.

In the modern world, the role of informatics is enormous. It covers not only the sphere of material production, but also the intellectual, spiritual aspects of life. The increase in the production of computer equipment, the development of information networks, the emergence of new information technologies significantly affect all spheres of society: production, science, education, medicine, culture, etc.

1.2. The concept of information

The word "information" comes from the Latin informatio, meaning explanation, clarification, presentation.

Information is information about objects and phenomena of the surrounding world, their properties, characteristics and state, perceived by information systems. Information is not a characteristic of the message, but of the relationship between the message and its analyzer. If there is no consumer, at least a potential one, it makes no sense to talk about information.

In computer science, information is understood as a certain sequence of symbolic designations (letters, numbers, images, sounds, etc.) that carries a semantic load and is presented in a form understandable to a computer. Each new character in such a sequence increases the information content of the message.

1.3. Information coding system

Information coding is used to unify the form of presentation of data that belongs to different types, in order to automate work with information.

Encoding is the expression of data of one type through data of another type. For example, natural human languages can be considered systems for encoding concepts in order to express thoughts through speech, and alphabets are systems for encoding language components using graphic symbols.

In computer technology, binary coding is used. The basis of this coding system is the representation of data through a sequence of two characters: 0 and 1. These characters are called binary digits, or bits for short. One bit can encode two values: 0 or 1 (yes or no, true or false, etc.). With two bits it is possible to express four different values, and with three bits, eight.
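This doubling rule (1 bit encodes 2 values, 2 bits encode 4, 3 bits encode 8) follows from the formula 2^n and can be illustrated with a short Python snippet (an illustrative sketch, not part of the original text):

```python
# Each additional bit doubles the number of encodable values: n bits -> 2**n
for n in (1, 2, 3, 8):
    print(f"{n} bit(s) encode {2 ** n} different values")
```

For n = 8 this already gives 256 values, which is why a single byte is enough for one text character.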

The next unit of information encoding after the bit is the byte, related to the bit as follows: 1 byte = 8 bits.

Usually one byte encodes one character of textual information. Based on this, for text documents, the size in bytes corresponds to the lexical size in characters.

A larger unit of encoding information is a kilobyte, related to a byte by the following ratio: 1 Kb = 1024 bytes.

Other, larger units of information encoding are obtained by adding the prefixes mega (MB), giga (GB) and tera (TB):

1 MB = 1024 KB;

1 GB = 1024 MB;

1 TB = 1024 GB.
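The chain of units, each 1024 times larger than the previous one, is easy to verify in Python (illustrative sketch):

```python
# Binary units of information, each 1024 times larger than the previous one
KB = 1024          # bytes in a kilobyte
MB = 1024 * KB     # bytes in a megabyte
GB = 1024 * MB     # bytes in a gigabyte
TB = 1024 * GB     # bytes in a terabyte
print(KB, MB, GB, TB)
```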

To encode an integer in binary, the integer is repeatedly divided by two until the quotient equals one. The remainders of each division, written from right to left together with the last quotient, form the binary equivalent of the decimal number.
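The repeated-division procedure can be sketched in Python (an illustrative implementation, not taken from the original text):

```python
def to_binary(n: int) -> str:
    """Convert a non-negative integer to binary by repeated division by 2."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, r = divmod(n, 2)      # quotient and remainder of division by 2
        digits.append(str(r))    # remainders are collected last-to-first
    return "".join(reversed(digits))  # read the remainders right to left

print(to_binary(19))   # the same result as Python's built-in bin(19)[2:]
```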

To encode integers from 0 to 255, 8 bits of binary code are sufficient. With 16 bits you can encode integers from 0 to 65,535, and with 24 bits more than 16.5 million different values.

In order to encode real numbers, 80-bit encoding is used. In this case, the number is first converted to a normalized form, for example:

2.1427926 = 0.21427926 × 10^1;

500 000 = 0.5 × 10^6.

The first part of the encoded number is called the mantissa, and the second the characteristic (the exponent). Most of the 80 bits are reserved for storing the mantissa, and a fixed number of bits for storing the characteristic.
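Python's standard library exposes the binary analogue of this normalized form: `math.frexp` splits a number into a mantissa and an exponent with base 2 instead of base 10 (an illustrative sketch):

```python
import math

# math.frexp splits x into mantissa m and exponent e with x == m * 2**e
# and 0.5 <= |m| < 1 -- the binary analogue of the normalized decimal form
m, e = math.frexp(500000.0)
print(m, e)   # mantissa and characteristic (exponent)
```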

1.4. Encoding of text information

Textual information is encoded in binary code through the designation of each character of the alphabet by a certain integer. Using eight binary digits, it is possible to encode 256 different characters. This number of characters is enough to express all characters of the English and Russian alphabets.

In the early years of the development of computer technology, the difficulties of coding textual information were caused by the lack of the necessary coding standards. At present, on the contrary, the existing difficulties are associated with a multitude of simultaneously operating and often conflicting standards.

For English, which is an unofficial international medium of communication, these difficulties have been resolved: the US standards institute ANSI developed and introduced the ASCII (American Standard Code for Information Interchange) coding system.
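In ASCII every character corresponds to an integer code; Python's built-in `ord()` and `chr()` convert between a character and its code (illustrative sketch):

```python
# ord() returns the numeric code of a character, chr() is the inverse
print(ord("A"), ord("Z"))   # codes of the Latin capital letters
print(chr(48))              # the character with code 48 (the digit '0')
```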

To encode the Russian alphabet, several encoding options have been developed:

1) Windows-1251 - introduced by Microsoft; given the widespread use of operating systems (OS) and other software products of this company in the Russian Federation, it has become widespread;

2) KOI-8 (eight-bit information exchange code) - another popular encoding of the Russian alphabet, common in computer networks on the territory of the Russian Federation and in the Russian sector of the Internet;

3) ISO (International Organization for Standardization) - an international standard for encoding characters of the Russian language. In practice, this encoding is rarely used.

A limited set of codes (256) creates difficulties for developers of a unified system for encoding textual information. As a result, it was proposed to encode characters not with 8-bit binary numbers but with numbers of a larger bit depth, which expands the range of possible code values. The 16-bit universal character encoding system is called UNICODE. Sixteen bits allow 65,536 unique codes, which is enough to fit most languages in one character table.

Despite the simplicity of the proposed approach, the practical transition to this encoding system could not be implemented for a very long time due to a lack of computer resources, since in the UNICODE system all text documents automatically become twice as large. In the late 1990s, technical means reached the required level, and a gradual transfer of documents and software to the UNICODE coding system began.
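The doubling in size is easy to demonstrate: the same text takes one byte per character in an 8-bit encoding but two bytes per character in a 16-bit one (illustrative sketch; UTF-16-LE is used here as a concrete 16-bit encoding):

```python
text = "Hello"
# One byte per character in an 8-bit encoding...
print(len(text.encode("ascii")))       # 5 bytes
# ...but two bytes per character in a 16-bit encoding
print(len(text.encode("utf-16-le")))   # 10 bytes
```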

1.5. Graphic information encoding

There are several ways to encode graphic information.

When a black-and-white graphic image is examined with a magnifying glass, it can be seen to consist of many tiny dots forming a characteristic pattern (a raster). The linear coordinates and individual properties of each image point can be expressed as integers, so the raster encoding method is based on a binary code for representing graphic data. A common standard represents black-and-white illustrations as a combination of dots with 256 shades of gray, so an 8-bit binary number is enough to encode the brightness of any point.

The coding of color graphic images is based on the principle of decomposing an arbitrary color into basic components, which are used as three primary colors: red (Red), green (Green) and blue (Blue). In practice, it is accepted that any color that the human eye perceives can be obtained using a mechanical combination of these three colors. This coding system is called RGB (by the first letters of the primary colors). When 24 bits are used to encode color graphics, this mode is called True Color.

Each of the primary colors is mapped to a color that complements the primary color to white. For any of the primary colors, the complementary color will be the one that is formed by the sum of a pair of other primary colors. Accordingly, among the additional colors, cyan (Cyan), magenta (Magenta) and yellow (Yellow) can be distinguished. The principle of decomposition of an arbitrary color into its constituent components is used not only for primary colors, but also for additional ones, that is, any color can be represented as the sum of cyan, magenta and yellow components. This color coding method is used in printing, but it also uses the fourth ink - black (Black), so this coding system is indicated by four letters - CMYK. To represent color graphics in this system, 32 bits are used. This mode is also called full color.

By reducing the number of bits used to encode the color of each point, the amount of data is reduced, but the range of encoded colors is noticeably reduced. Encoding color graphics with 16-bit binary numbers is called the High Color mode. When encoding graphic color information using 8 bits of data, only 256 shades can be transmitted. This color coding method is called index.
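The trade-off between color depth and data size described above can be made concrete: the number of available colors is 2 raised to the bit depth, and the uncompressed image size grows linearly with the bit depth (an illustrative sketch with an assumed 800x600 image):

```python
def raster_bytes(width: int, height: int, bits_per_pixel: int) -> int:
    """Uncompressed size of a raster image at the given color depth."""
    return width * height * bits_per_pixel // 8

print(2 ** 24)                      # colors in True Color mode (24 bits)
print(2 ** 16)                      # colors in High Color mode (16 bits)
print(raster_bytes(800, 600, 24))   # True Color: 1,440,000 bytes
print(raster_bytes(800, 600, 8))    # index mode:   480,000 bytes
```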

1.6. Audio encoding

Currently, there is no single standard system for encoding audio information, since techniques and methods for working with sound began to develop later than those for other types of information. Therefore, many companies working in the field of information encoding have created their own corporate standards for audio information. Among these corporate standards, two main approaches stand out.

The FM (Frequency Modulation) method is based on the fact that, theoretically, any complex sound can be decomposed into a sequence of simple harmonic signals of different frequencies. Each such harmonic signal is a regular sine wave and can therefore be described numerically, i.e. encoded. Sound signals form a continuous spectrum, that is, they are analog, so their decomposition into harmonic series and representation as discrete digital signals is carried out by special devices - analog-to-digital converters (ADCs). The reverse conversion, needed to reproduce sound encoded with a numerical code, is performed by digital-to-analog converters (DACs). These transformations of audio signals involve a loss of information associated with the coding method, so the quality of sound recording with the FM method is usually not high enough and corresponds to the sound of the simplest electric musical instruments, with a timbre characteristic of electronic music. At the same time, this method yields a very compact code, so it was widely used in the years when computing resources were clearly insufficient.

The main idea of ​​the wave-table synthesis method (Wave-Table) is that in pre-prepared tables there are sound samples for many different musical instruments. These sound samples are called samples. The numerical codes that are embedded in the sample express such characteristics as the type of instrument, its model number, pitch, duration and intensity of the sound, the dynamics of its change, some components of the environment in which the sound is observed, and other parameters that characterize the features of the sound. Since real sounds are used for the samples, the quality of the encoded sound information is very high and approaches the sound of real musical instruments, which is more in line with the current level of development of modern computer technology.

1.7. Modes and methods of information transfer

For correct data exchange between nodes of a local area network, certain modes of information transfer are used:

1) simplex (unidirectional) transmission;

2) half-duplex transmission, in which the reception and transmission of information by the source and receiver are carried out alternately;

3) duplex transmission, in which parallel simultaneous transmission is performed, i.e. each station simultaneously transmits and receives data.

In information systems, serial data transmission is very often used. There are synchronous and asynchronous methods of serial data transmission.

The synchronous method is distinguished by the fact that data is transferred in blocks. To synchronize the operation of the receiver and transmitter, synchronization bits are sent at the beginning of the block. Then the data, an error detection code and a symbol indicating the end of the transfer are transmitted. This sequence forms the standard data transmission scheme for the synchronous method. In synchronous transmission, data can be sent both as characters and as a stream of bits. The error detection code is most often a cyclic redundancy check (CRC), computed from the contents of the data field. It makes it possible to determine unambiguously whether the received information is reliable.
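The idea of a CRC check can be sketched with Python's standard `zlib.crc32` (CRC-32 is one common variant; the specific polynomial used by a given link protocol may differ): the sender computes a checksum over the data field, and the receiver recomputes it and compares.

```python
import zlib

# The sender computes a CRC over the data field and transmits it with
# the block; the receiver recomputes it to detect corruption
data = b"synchronous data block"
crc = zlib.crc32(data)
print(hex(crc))
print(zlib.crc32(data) == crc)          # block arrived intact
print(zlib.crc32(data + b"!") == crc)   # corrupted block fails the check
```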

The advantages of the synchronous data transfer method include:

▪ high efficiency;

▪ reliable built-in error detection mechanism;

▪ high data transfer speed.

The main disadvantage of this method is the expensive interface hardware.

The asynchronous method is distinguished by the fact that each character is transmitted in a separate package. Start bits alert the receiver to the start of transmission, after which the character itself is transmitted. The parity bit is used to check the validity of the transmission: it is one when the number of ones in the character is odd and zero when it is even. The last bit, called the "stop bit", signals the end of transmission. This sequence forms the standard data transfer scheme for the asynchronous method.
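The parity-bit rule just described can be sketched in a few lines of Python (illustrative, not part of the original text):

```python
def parity_bit(char_code: int) -> int:
    """Parity bit as described above: 1 if the number of 1-bits is odd."""
    return bin(char_code).count("1") % 2

print(parity_bit(0b1011))   # three ones (odd number)  -> parity bit 1
print(parity_bit(0b1001))   # two ones  (even number)  -> parity bit 0
```

A single flipped bit changes the parity and is detected, but two simultaneous bit errors cancel out, which is exactly the multiple-error weakness listed below.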

The advantages of the asynchronous transfer method are:

▪ inexpensive (compared to synchronous) interface equipment;

▪ simple proven transmission system.

The disadvantages of this method include:

▪ loss of a third of the bandwidth for transmitting service bits;

▪ low transmission speed compared to the synchronous method;

▪ inability to determine the reliability of the received information using the parity bit in case of multiple errors.

The asynchronous transfer method is used in systems in which data exchange occurs from time to time and a high data transfer rate is not required.

1.8. Information Technology

Information is one of the most valuable resources of society, so the process of its processing, like the processing of material resources (for example, oil, gas, minerals, etc.), can be perceived as a kind of technology. In this case, the following definitions are valid.

Information resources are a collection of data that is of value to an enterprise (organization) and acts as a material resource. These include texts, knowledge, data files, etc.

Information technology is a set of methods, production processes and software and hardware tools that are combined into a technological chain. This chain ensures the collection, storage, processing, output and dissemination of information in order to reduce the complexity of using information resources, as well as increase their reliability and efficiency.

According to the definition adopted by UNESCO, information technology is a set of interrelated scientific, technological and engineering disciplines that study methods of effectively organizing the work of people engaged in processing and storing information, as well as computer technology and methods of organizing and interacting with people and production equipment.

The system of methods and production processes defines the techniques, principles and activities that regulate the design and use of software and hardware for data processing. Depending on the specific application tasks that need to be solved, various data processing methods and technical means are used. There are three classes of information technologies that allow you to work with various kinds of subject areas:

1) global, including models, methods and tools that formalize and allow the use of information resources of society as a whole;

2) basic, designed for a specific area of ​​application;

3) specific, realizing the processing of certain data when solving the functional tasks of the user (in particular, the tasks of planning, accounting, analysis, etc.).

The main purpose of information technology is the production and processing of information for its analysis and the adoption of an appropriate decision on its basis, which provides for the implementation of any action.

1.9. Stages of information technology development

There are several points of view on the development of information technology with the use of computers. The stages are distinguished on the basis of the following criteria.

Allocation of stages on the problems of the process of informatization of society:

1) until the end of the 1960s. - the problem of processing large amounts of information in conditions of limited hardware capabilities;

2) until the end of the 1970s. - backlog of software from the level of development of hardware;

3) since the early 1980s. - problems of maximum satisfaction of the user's needs and the creation of an appropriate interface for working in a computer environment;

4) since the early 1990s. - development of an agreement and establishment of standards, protocols for computer communications, organization of access to strategic information, etc.

Allocation of stages according to the advantage brought by computer technology:

1) since the early 1960s. - efficient processing of information when performing routine work with a focus on centralized collective use of computing center resources;

2) since the mid-1970s. - the emergence of personal computers (PCs). At the same time, the approach to creating information systems has changed - the orientation is shifting towards the individual user to support his decisions. Both centralized and decentralized data processing is used;

3) since the early 1990s. - development of telecommunication technology for distributed information processing. Information systems are used to help an organization fight competitors.

Allocation of stages by types of technology tools:

1) until the second half of the 19th century - "manual" information technology, in which the tools were pen, ink and paper;

2) from the end of the 19th century - "mechanical" technology, the tools of which were the typewriter, telephone, voice recorder and mail;

3) 1940s-1960s - "electrical" technology, the tools of which were large electronic computers and the associated software, electric typewriters, photocopiers and portable voice recorders;

4) since the early 1970s. - "electronic" technology, the main tools are large computers and automated control systems (ACS) and information retrieval systems (IPS) created on their basis, which are equipped with a wide range of software systems;

5) since the mid-1980s. - "computer" technology, the main toolkit is a PC with a wide range of standard software products for various purposes.

1.10. The advent of computers and computer technology

For many centuries, people have been trying to create various devices to facilitate calculations. In the history of the development of computers and computer technologies, there are several important events that have become decisive in the further evolution.

In the 1640s, B. Pascal invented a mechanical device that could be used to add numbers.

At the end of the 17th century, G. Leibniz created a mechanical device for adding and multiplying numbers.

In 1946, the first mainframe computers appeared. American scientists J. von Neumann, H. Goldstine and A. Burks published a work presenting the basic principles of creating a universal computer. From the late 1940s, the first prototypes of such machines, conventionally called first-generation computers, began to appear. These computers were built on vacuum tubes and lagged behind modern calculators in performance.

In the further development of computers, the following stages are distinguished:

▪ second generation of computers - the invention of transistors;

▪ third generation of computers - creation of integrated circuits;

▪ fourth generation of computers - the emergence of microprocessors (1971).

The first microprocessors were produced by Intel, which led to the emergence of a new generation of PCs. Due to the mass interest in such computers that arose in society, IBM (International Business Machines Corporation) developed a new project to create them, and Microsoft developed software for this computer. The project ended in August 1981, and the new PC became known as the IBM PC.

The developed computer model became very popular and quickly ousted all previous IBM models from the market in the next few years. With the invention of the IBM PC, the standard IBM PC-compatible computers began to be produced, which make up the majority of the modern PC market.

In addition to IBM PC-compatible computers, there are other types of computers designed to solve problems of varying complexity in various areas of human activity.

1.11. The evolution of the development of personal computers

The development of microelectronics led to the emergence of microminiature integrated electronic elements that replaced semiconductor diodes and transistors and became the basis for the development and use of PCs. These computers had a number of advantages: they were compact, easy to use and relatively cheap.

In 1971, Intel created the i4004 microprocessor, and in 1974, the i8080, which had a huge impact on the development of microprocessor technology. This company to this day remains the market leader in the production of microprocessors for PCs.

Initially, PCs were developed on the basis of 8-bit microprocessors. One of the first manufacturers of computers with a 16-bit microprocessor was IBM, which until the 1980s specialized in the production of large computers. In 1981 it first released a PC that used the principle of open architecture, which made it possible to change the configuration of the computer and improve its properties.

In the late 1970s, other large companies in leading countries (USA, Japan, etc.) also began to develop PCs based on 16-bit microprocessors.

In 1984, Apple's Macintosh appeared - a competitor to the IBM PC. In the mid-1980s, computers based on 32-bit microprocessors were released. 64-bit systems are currently available.

Based on the values of their main parameters and their intended use, the following groups of computer equipment are distinguished:

▪ supercomputer - a unique ultra-efficient system used to solve complex problems and large calculations;

▪ server - a computer that provides its own resources to other users; there are file servers, print servers, database servers, etc.;

▪ personal computer - a computer designed for use in the office or at home. The user can configure, maintain and install software for this type of computer;

▪ professional workstation - a high-performance computer designed for professional work in a certain area. Most often it is supplied with additional equipment and specialized software;

▪ laptop - a portable computer with the computing power of a PC. It can function for some time without power from the electrical network;

▪ pocket PC (electronic organizer) - no larger than a calculator, with or without a keyboard, similar in functionality to a laptop;

▪ network PC - a computer for business use with a minimum set of external devices. Operation support and software installation are carried out centrally. It is also used to work in a computer network and to function offline;

▪ terminal - a device used when working in offline mode. The terminal does not contain a processor for executing commands; it only performs operations of entering and transmitting user commands to another computer and returning the result to the user.

The market for modern computers and the number of machines produced are determined by market needs.

1.12. Structure of modern computing systems

In the structure of today's PC such as the IBM PC, there are several main components:

▪ a system unit that organizes work, processes information, makes calculations, and ensures communication between a person and a computer. The PC system unit includes a motherboard, speaker, fan, power supply, two disk drives;

▪ system (mother)board, which consists of several dozen integrated circuits for various purposes. The central one is the microprocessor, which is designed to perform calculations according to a program stored in a storage device and to provide general control of the PC. The speed of a PC depends on the speed of the processor;

▪ PC memory, which is divided into internal and external:

a) internal (main) memory is a storage device associated with the processor and designed to store used programs and data that are involved in calculations. Internal memory is divided into operational (random access memory - RAM) and permanent (read-only memory - ROM). Random access memory is intended for receiving, storing and issuing information, and permanent memory is for storing and issuing information;

b) external memory (external storage device - ESD) is used to store large amounts of information and to exchange it with RAM. By design, ESDs are separate from the central devices of the PC;

▪ sound card (audio card), used for playing and recording sound;

▪ video card (video adapter), which provides playback and recording of a video signal.

External input devices in a PC include:

a) keyboard - a set of sensors that perceive pressure on the keys and close some electrical circuit;

b) mouse - a manipulator that simplifies the work with most computers. There are mechanical, optical-mechanical and optical mice, as well as wired and wireless;

c) scanner - a device that allows you to enter text, pictures, photographs, etc. into a computer in graphical form.

External information output devices are:

a) a monitor used to display various types of information on the screen. Monitor screen size is measured diagonally in inches, from the bottom left to the top right corner of the screen;

b) a printer used to print text and graphics prepared on a computer. There are dot matrix, inkjet and laser printers.

External input devices are used to make information that the user has available to the computer. The main purpose of an external output device is to present the available information in a form accessible to the user.

Topic 2. Computer technologies for information processing

2.1. Classification and arrangement of computers

A computer (from the English computer - calculator) is a programmable electronic device that is capable of processing information, performing calculations and performing other tasks. Computers are divided into two main types:

1) digital, evaluating data in the form of numerical binary codes;

2) analog, analyzing continuously changing physical quantities, which are analogues of the calculated quantities.

Currently, the word "computer" refers to a digital computer.
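The binary-code idea behind digital computers can be made concrete with a short Python sketch; the helper name `to_binary` is our own, chosen for illustration:

```python
# A digital computer represents every kind of data as binary codes.
# Here we show the binary code of a number and of a text character.

def to_binary(value: int, width: int = 8) -> str:
    """Return the fixed-width binary code of a non-negative integer."""
    return format(value, f"0{width}b")

number_code = to_binary(77)        # the number 77
char_code = to_binary(ord("M"))    # the character 'M' via its ASCII code

print(number_code)  # 01001101
print(char_code)    # 01001101 -- 'M' has ASCII code 77, so the same bit pattern
```

Note that the "same" byte can stand for a number or a character; the interpretation is fixed by the program, not by the bits themselves.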

Computers are based on hardware (Hardware) formed by electronic and electromechanical elements and devices. The principle of operation of computers is to execute programs (Software) that are predetermined and clearly defined by a sequence of arithmetic, logical and other operations.

The structure of any computer is determined by general logical principles, on the basis of which the following main devices are distinguished in it:

▪ memory consisting of numbered cells;

▪ processor, which includes a control unit (CU) and an arithmetic-logical unit (ALU);

▪ input device;

▪ output device.

These devices are connected by communication channels that transmit information.

2.2. Computer architecture

The computer architecture is characterized by the qualities of the machine that affect its interaction with the user. Architecture defines a set of machine properties and characteristics that a programmer needs to know in order to effectively use a computer in solving problems.

In turn, the architecture determines the principles of organization of the computing system and the functions of the central computing device. However, it does not show how these principles are implemented inside the machine, and it does not cover machine resources that are inaccessible to the programmer. If computers have the same architecture, then any machine-code program written for one of them runs the same way on the other, with the same results.

To perform its functions, any computer requires a minimum set of functional blocks.

The architecture of today's computers has classic features, but there are some differences. In particular, the storage device (memory) of the first computers of the classical structure was divided into two types:

1) internal, containing information that was processed in it at some point in time;

2) external, which is a repository of all information necessary for the operation of a computer.

In the course of technological progress, the number of levels in the memory hierarchy of computers has increased.

The arithmetic logic unit and the control unit form a single unit called the central processing unit. The list of devices for input and output of data includes various drives on magnetic, optical and magneto-optical disks, scanners, keyboard, mouse, joystick, printers, plotters, etc. The structure of a modern PC contains two main parts, central and peripheral; the central part is customarily taken to comprise the central processor and internal memory.

The central processing unit (CPU) is a device that processes data and performs software control of this process. The central processor consists of an ALU, a control unit, and sometimes the processor's own memory; it is most often implemented in the form of a large integrated circuit and is called a microprocessor.

Internal memory is a device designed to store information in a special coded form.

Random access memory (RAM) is the part of internal storage that interacts directly with the CPU. RAM is used to receive, store and issue all the information required to perform operations in the CPU.

External storage devices are needed to store large amounts of information that is not currently used by the processor. These include: magnetic disk drives, magnetic tape drives, optical and magneto-optical drives.

Virtual memory is a combination of RAM, external storage devices, and the software and hardware that manage them.

The configuration of a computer is a certain composition of its devices, taking into account their features.

An input operation is the transfer of information from peripheral devices to central ones, an output operation is the process of transferring information from central devices to peripheral ones.

Interfaces are the connecting facilities that carry out communication between PC devices during computation.

2.3. Memory in personal computers

The power of a computer depends on its architecture and is determined not only by the clock frequency of the processor. System performance is also affected by memory speed and bus bandwidth.

How the CPU and main memory interact depends on the type of memory installed and on the chipset on the system board.

Memory devices are used to store information. Their functions include its recording and reading. Collectively, these functions are referred to as memory access.

Among the most important characteristics of memory are its capacity and access time. Most often, memory consists of many identical storage elements. Such elements were once ferrite cores combined into a bit memory matrix; today the storage elements of main memory are large-scale integrated circuits (LSI).

Since the processor can access any cell of main memory while processing information, this memory is called random access memory, or RAM. Typically, a PC's main memory is built on dynamic-type chips, with cells assembled in a matrix.

In static-type memory, information is stored on static flip-flops. Static memory needs no regeneration cycles or reload operations, so its access time is much shorter than that of dynamic memory. The speed of the processor depends heavily on the speed of the main memory it works with, which in turn affects the performance of the entire system. Implementing one storage element requires 1-2 transistors for dynamic memory and 4-6 for static memory, so static memory costs significantly more than dynamic. For this reason, a PC most often uses dynamic-type RAM and, to improve system performance, an ultra-fast cache memory built on static-type elements. The block of data being processed by the processor is placed in the cache, and RAM is accessed only when data not contained in the cache is needed. Using cache memory makes it possible to match the speed of the processor to that of the dynamic-type main memory.
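The benefit of placing frequently used data in a fast static cache can be illustrated with the standard average-access-time formula; the timings and hit rate below are illustrative numbers, not figures from the text:

```python
# Average memory access time with a cache:
#   t_avg = hit_rate * t_cache + (1 - hit_rate) * t_ram
# Illustrative timings: fast static-type cache vs. slower dynamic-type RAM.

def avg_access_time(hit_rate: float, t_cache_ns: float, t_ram_ns: float) -> float:
    return hit_rate * t_cache_ns + (1 - hit_rate) * t_ram_ns

# With a 95% hit rate, most accesses run at cache speed:
print(avg_access_time(0.95, 1.0, 60.0))  # 3.95 (ns)
# Without a cache, every access pays the full RAM latency:
print(avg_access_time(0.0, 1.0, 60.0))   # 60.0 (ns)
```

Even a small static cache thus hides most of the dynamic memory's latency, which is exactly why the mixed design is economical.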

Memory integrated circuits are produced by many Japanese, Korean, American and European companies.

Read-only memory (ROM) is designed to store the BIOS, which in turn makes the software invariant to the motherboard architecture. In addition, the BIOS contains the necessary set of I/O programs that ensure the operation of peripheral devices.

In addition to I/O programs, the ROM includes:

▪ POST test program when turning on the computer;

▪ a bootloader program that performs the function of loading the OS from disk.

The BIOS is stored in ROM chips whose contents can be erased electrically or with ultraviolet radiation. At present, thanks to falling prices, flash memory is most often used for this purpose, which allows corrections to be made to the BIOS.

2.4. The concept of a command and computer system software

Every computer program is a sequence of individual commands. A command is a description of an operation that the computer performs. Usually an instruction has its own operation code, source data (operands) and a result. The set of commands that a given computer can execute forms that computer's instruction set.
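The notions of a command (operation code plus operands) and an instruction set can be sketched as a toy interpreter in Python; the opcode names and the dictionary-based dispatch are invented purely for illustration:

```python
# An "instruction set": the operations this toy machine understands,
# keyed by operation code.
INSTRUCTION_SET = {
    "ADD": lambda a, b: a + b,
    "SUB": lambda a, b: a - b,
    "MUL": lambda a, b: a * b,
}

def execute(program):
    """Run a program: a sequence of (opcode, operand, operand) commands."""
    results = []
    for opcode, a, b in program:
        operation = INSTRUCTION_SET[opcode]  # decode the command
        results.append(operation(a, b))      # execute it, keep the result
    return results

print(execute([("ADD", 2, 3), ("MUL", 4, 5)]))  # [5, 20]
```

A command with an opcode missing from `INSTRUCTION_SET` would fail here, just as a real processor cannot execute an instruction outside its instruction set.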

Computer software is a set of programs, procedures and instructions, as well as technical documentation related to them, that allow using a computer to solve specific tasks.

According to the areas of application, computer software is divided into system and application.

System, or general, software acts as an "organizer" of all computer components, as well as external devices connected to it.

The system software consists of two components:

1) operating system - a whole complex of control programs that are an interface between PC components and ensure the most efficient use of computer resources. The operating system is loaded when the computer is turned on;

2) utilities - auxiliary maintenance programs.

Utilities include:

▪ computer diagnostic programs - check the computer configuration and the functionality of its devices; first of all, hard drives are checked for errors;

▪ disk optimization programs - provide faster access to information stored on the hard drive by optimizing the placement of data on it. The process of optimizing data on a hard drive is better known as the process of disk defragmentation;

▪ disk cleaning programs - find and delete unnecessary information (for example, temporary files, temporary Internet files, files located in the recycle bin, etc.);

▪ disk cache programs - speed up access to data on the disk by organizing a cache buffer in the computer operating system containing the most frequently used disk areas;

▪ dynamic disk compression programs - increase the amount of information stored on hard drives by dynamically compressing it. The actions of these programs are not noticeable to the user; they appear only through an increase in disk capacity and a change in the speed of access to information;

▪ packaging programs (archivers) - pack data on hard drives using special information compression methods. These programs free up significant disk space by compressing information;

▪ anti-virus programs - prevent infection by a computer virus and eliminate its consequences;

▪ programming systems - a set of tools for automating the development of computer programs.
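The effect of an archiver on repetitive data can be seen with Python's standard `zlib` module; the sample data is, of course, our own:

```python
import zlib

# Repetitive data, of the kind an archiver exploits best.
data = b"informatics lecture notes " * 50

packed = zlib.compress(data)        # pack (compress)
restored = zlib.decompress(packed)  # unpack (decompress)

assert restored == data             # compression is lossless
print(len(data))                    # 1300 bytes before packing
print(len(packed) < len(data))      # True: the packed form is much smaller
```

The exact compressed size depends on the compressor, but the round trip always restores the data bit for bit, which is what distinguishes archiving from lossy media compression.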

Application software comprises programs used to solve specific practical problems. By now, programmers have developed many applications used in mathematics, accounting and other fields.

2.5. Basic Input/Output System (BIOS). Understanding CMOS RAM

The basic input-output system (BIOS) is, on the one hand, an integral part of the hardware, and on the other hand, one of the OS software modules. The name comes from the fact that the BIOS includes a set of I/O programs. With the help of these programs, the operating system and application programs can interact with various devices of the computer itself, as well as with peripheral devices.

As an integral part of the hardware, the BIOS system in a PC is implemented as a single chip installed on the computer's motherboard. Most modern video adapters and storage controllers have their own BIOS that complements the system BIOS. One of the developers of the BIOS is IBM, which created NetBIOS. This software product cannot be copied, so other computer manufacturers have been forced to use third-party BIOS chips. Specific BIOS versions are tied to the chipset (set of supporting chips) found on the motherboard.

As an OS software module, the BIOS system contains the POST (Power-On Self Test) program that runs when the computer is turned on. This program tests the main components of the computer (processor, memory, etc.). If the computer has problems powering up, i.e. the BIOS is unable to complete the initial test, the error is reported as a series of beeps.

The non-volatile CMOS RAM memory stores information about the configuration of the computer (the amount of memory, types of drives, etc.). This is the information that BIOS software modules need. This memory is based on a certain type of CMOS structures (CMOS - Complementary Metal Oxide Semiconductor), which are characterized by low power consumption. CMOS memory retains its contents because it is powered by a battery located on the system board, or by a battery of galvanic cells mounted on the system unit case.

Changing settings in CMOS is done through the SETUP program. It can be invoked by pressing a special key combination (DEL, ESC, CTRL-ESC, or CTRL-ALT-ESC) during boot (some BIOSes allow you to start SETUP at any time by pressing CTRL-ALT-ESC). In AMI BIOS, this is most often done by pressing the DEL key (and holding it) after pressing the RESET button or turning on the computer.

Topic 3. Hardware and software architecture of IBM-compatible technologies

3.1. Microprocessors

The central processing unit is an integral part of any computer. This is usually a large integrated circuit: a silicon crystal in a plastic, ceramic or cermet case with leads for receiving and issuing electrical signals. The functions of the CPU are performed by microprocessors. They carry out calculations, transfer data between internal registers and control the course of the computational process. The microprocessor interacts directly with main memory and the motherboard controllers. The main information carriers inside it are registers.

An integral part of the microprocessor are:

▪ ALU, consisting of several blocks, for example, an integer processing unit and a floating point processing unit;

▪ a control device that generates control signals to execute commands;

▪ internal registers.

The operation of each microprocessor unit is based on the pipeline principle, which works as follows. The execution of each machine instruction is divided into separate stages, and execution of the next program instruction can begin before the previous one has completed. The microprocessor thus executes several consecutive program instructions simultaneously, and the time to execute a block of instructions is reduced several times over. An architecture that builds on the pipeline principle by providing several processing units in the microprocessor, so that several instructions can proceed in parallel, is called superscalar.

A program may contain control-transfer commands whose execution depends on the results of previous commands. Modern pipelined microprocessors therefore provide branch-prediction mechanisms: when a conditional jump instruction appears in the instruction queue, the processor predicts which instruction will execute next before the branch condition is actually determined. The predicted branch of the program is executed in the pipeline, but its results are committed only after the branch condition has been computed and the prediction proves correct. If the wrong branch was chosen, the microprocessor rolls back and executes the correct operations according to the computed branch condition.
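The time savings from pipelining can be estimated with a simple cycle count; the stage and instruction counts below are illustrative, not taken from any particular processor:

```python
# Without a pipeline, each instruction runs all its stages
# before the next instruction starts.
def cycles_unpipelined(n_instructions: int, n_stages: int) -> int:
    return n_instructions * n_stages

# With a pipeline, the stages overlap: after the pipeline fills,
# one instruction completes every cycle.
def cycles_pipelined(n_instructions: int, n_stages: int) -> int:
    return n_stages + (n_instructions - 1)

print(cycles_unpipelined(100, 5))  # 500 cycles
print(cycles_pipelined(100, 5))    # 104 cycles -- nearly 5x faster
```

This also shows why a mispredicted branch is costly: flushing and refilling the pipeline forfeits the overlap that produced the speedup.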

Important characteristics of the microprocessor are:

▪ its performance, which largely depends on the clock frequency of the microprocessor;

▪ microprocessor architecture, which determines what data it can process, what machine instructions are included in the set of commands it executes, how data is processed, and how much internal memory the microprocessor has.

The structure of the microprocessor may include a cache memory, which provides faster access to information than main memory. There is a first-level cache, usually built into the same chip and running at the same frequency as the microprocessor, and a second-level cache, which is unified when instructions and data are stored together and split when they are stored separately.

When solving complex mathematical and physical problems, some computers provide for the use of a special device called a mathematical coprocessor. This device is a specialized integrated circuit that works in conjunction with the CPU and is designed to perform floating point mathematical operations.

3.2. System boards. Buses, interfaces

The main electronic part of the PC is structurally located in the system unit. The system unit can be of several sizes and types, for example desktop, tower type. Various computer components inside the system unit are located on the system board, which is called the motherboard.

The motherboard plays a significant role, since the operation of the PC largely depends on its characteristics. There are several types of motherboards that are usually designed for specific microprocessors. The choice of the motherboard largely determines the possibility of future computer upgrades. When choosing a motherboard, consider the following characteristics:

▪ possible types of microprocessors used, taking into account their operating frequencies;

▪ number and type of system bus connectors;

▪ the base set of chips (chipset);

▪ the ability to expand RAM and cache memory;

▪ the ability to update the basic input/output system (BIOS).

The system board contains one or more integrated circuits that manage communication between the processor, memory and I/O devices; together they are called the system chipset.

Among chipsets, the Intel 440LX and Intel 440BX are in the greatest demand. The largest motherboard manufacturer is Intel, which has introduced most of the technological and technical innovations for motherboards. However, Intel products are not cheap.

Directly on the motherboard is the system bus, which is designed to transfer information between the processor and the rest of the PC components. With the help of the bus, both the exchange of information and the transmission of addresses and service signals take place.

IBM PC-compatible computers originally used a 16-bit bus running at 8 MHz. After the advent of new microprocessors and high-speed peripherals, a new standard was proposed - the MCA bus with a higher clock speed. It contained arbitration functions to avoid conflict situations when several devices work together. This bus increased throughput and achieved greater compactness; its width is 16 or 32 bits.

In 1989, the EISA bus was developed, which actually became an add-on to ISA. This bus was mainly used in high-performance servers and professional workstations.

Since 1991, so-called local buses have been used to increase system performance. They connected the processor directly to the controllers of peripheral devices and thus increased the overall speed of the PC. Among the local buses, the most famous is the VL-bus, which was focused on PCs with microprocessors of the i486 family, although it can also work with Pentium processors.

The processor-independent PCI bus operates at a clock frequency of 33 MHz and has a high data transfer rate. Especially for this bus, many adapters for peripheral devices have been released - video cards, disk controllers, network adapters, etc.

To work with graphic and video data, the AGP bus was developed, which is faster than PCI. The AGP bus directly connects the graphics adapter to the PC's RAM, which is very important when working with video and with two- and three-dimensional applications; it operates at a frequency of 66 MHz.
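Peak bus bandwidth follows directly from bus width and clock frequency, assuming one transfer per clock cycle; the PCI and AGP figures below come from the clock rates named in the text, and the helper name is our own:

```python
# Peak throughput of a bus = bytes per transfer * transfers per second,
# assuming one transfer per clock cycle.
def peak_bandwidth_mb_s(width_bits: int, clock_mhz: float) -> float:
    return width_bits / 8 * clock_mhz  # result in MB/s

print(peak_bandwidth_mb_s(32, 33))  # 132.0 MB/s -- 32-bit PCI at 33 MHz
print(peak_bandwidth_mb_s(32, 66))  # 264.0 MB/s -- AGP at 66 MHz
```

Real sustained rates are lower because of arbitration and protocol overhead, but the formula explains why doubling either the width or the clock doubles the ceiling.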

Peripherals are connected to the system bus using controllers or adapters. Adapters are special boards that are different for different types of peripherals.

3.3. External device controls

External devices provide input, output and accumulation of information in the PC, interact with the processor and operating system through the system or local bus, as well as through input-output ports. They are located both outside the system unit (keyboard, mouse, monitor, printer, external modem, scanner) and inside it (disk drives, device controllers, internal fax modems). Often, external devices are called peripheral, although in the narrow sense the term "peripheral" means a part of the devices that provide input and output of information (keyboards, pointers, scanners, printers, etc.).

Most external devices for IBM-compatible PCs are controlled by controllers that are installed in the expansion slots on the motherboard. A controller is a board that controls the operation of a particular type of external device and ensures its communication with the system board. Most controllers are system expansion cards, with the exception of the controllers for ports and for floppy and hard disk drives, which are built directly into the motherboard. In early IBM-compatible PCs, these controllers were usually placed on a separate board called a multicard. In laptop computers, other controllers, including video adapters and sound cards, are sometimes also built into the motherboard.

Expansion boards, called daughter boards, are installed on the motherboard. They are designed to connect additional devices to the PC bus; the motherboard usually has 4 to 8 expansion slots. In accordance with the bit width of the processor and the motherboard's external data bus, expansion boards come in 8-, 16- and 32-bit versions.

Daughterboards are divided into two types:

1) full-size, i.e. the same length as the motherboard;

2) half-sized, i.e., two times shorter.

Any daughter boards can be installed in the expansion slots if they are compatible with the bus in terms of control, bitness and power supply.


The most important types of expansion boards are:

1) video adapters (required for the normal functioning of the PC);

2) internal modems (provide data exchange over telephone lines);

3) sound cards (designed for multimedia systems);

4) LAN adapters (required when using a computer in a local area network environment).

In addition to the above, other types of expansion cards are used:

▪ scanner control;

▪ streamer management;

▪ SCSI interface;

▪ virtual reality device controllers;

▪ ADC (analog-to-digital converter) boards;

▪ barcode reading devices;

▪ light pen control;

▪ connections with mainframe computers;

▪ accelerator boards.

The PC exchanges data with external devices through special I/O controllers, and this exchange takes place through the I/O ports.

The serial port transmits information one bit at a time, while the parallel port transmits information byte by byte. Serial ports connect devices such as a mouse, external modem, and plotter.
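The difference between serial and parallel transfer can be sketched in a few lines of Python; sending least-significant bit first is an assumption chosen for the example, not a claim about any particular port standard:

```python
# A serial port sends a byte one bit at a time;
# here we model it as a list of bits, least-significant bit first.
def serialize_byte(value: int) -> list[int]:
    return [(value >> i) & 1 for i in range(8)]

# The receiver reassembles the bits back into a byte.
def deserialize(bits: list[int]) -> int:
    return sum(bit << i for i, bit in enumerate(bits))

byte = 0b01001101
bits = serialize_byte(byte)
print(bits)                        # [1, 0, 1, 1, 0, 0, 1, 0]
print(deserialize(bits) == byte)   # True -- the round trip is lossless
```

A parallel port, by contrast, would move all eight bits in one step over eight wires, which is why it is faster per transfer but needs a wider cable.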

3.4. Information accumulators

A device designed for long-term storage of significant amounts of information is called a drive, or external storage device (mass storage device).

Depending on the location in the PC, drives are distinguished:

1) external, which are outside the system unit and have their own case, power supply, as well as a switch and cable;

2) internal, which are located on the mounting rack of the computer system unit. These devices do not have their own housing and are connected to the storage controller and PC power supply.

According to the recording method, random access devices and sequential access devices are distinguished.

The main types of disk drives are:

▪ floppy disk drives;

▪ hard magnetic disk drives (HDD, hard drives);

▪ storage devices on removable CDs.

In floppy disk drives, information is recorded along tracks divided into separate sectors, with inter-sector gaps between them. The number of tracks and sectors and the sector size depend on the type of device and media and on the way the media is formatted.

The principle of operation of such drives is that the diskette, which is installed in the drive, rotates at a speed of 300-360 rpm, which provides access to the desired sector. Writing special control information to a disk is called formatting.

Hard disk drives are several metal disks that are placed on the same axis and enclosed in a sealed metal case. These discs must be formatted before use. On hard disks, information is located on tracks, and inside tracks - on sectors. A set of tracks on a package of magnetic disks with the same numbers is called a cylinder.
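The cylinder/head/sector description above translates directly into a capacity formula; the geometry in the example (1024 cylinders, 16 heads, 63 sectors of 512 bytes) is a hypothetical one chosen for illustration:

```python
# Disk capacity from its geometry: every cylinder holds one track per head,
# and every track holds the same number of fixed-size sectors.
def chs_capacity_bytes(cylinders: int, heads: int,
                       sectors_per_track: int, sector_size: int = 512) -> int:
    return cylinders * heads * sectors_per_track * sector_size

capacity = chs_capacity_bytes(1024, 16, 63)
print(capacity)                    # 528482304 bytes
print(capacity // (1024 * 1024))   # 504 MB
```

The same arithmetic works for a floppy: fewer tracks, fewer heads, and far fewer sectors per track yield a capacity thousands of times smaller.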

Among the main characteristics of HDD are:

▪ information capacity;

▪ recording density;

▪ number of tracks;

▪ access time (milliseconds);

▪ external overall dimensions.

Besides the drive types listed above, the following are also in common use:

▪ rewritable CD drives;

▪ high-capacity removable magnetic disk drives;

▪ magneto-optical disk drives.

Such drives are connected to the system bus using various types of interface, including connection elements and auxiliary control circuits needed to connect devices.

Removable CD drives are used when using multimedia systems. These drives (CD-ROM) are adapted to read information from CDs containing up to 700 MB. Recording on such discs is carried out once using special equipment.

CD-RW drives, unlike CD-R drives, allow multiple rewrites.

High-capacity removable magnetic disk drives are designed to record up to 200 MB of information or more on a removable disk.

Drives on magneto-optical disks use an original scheme for reading and writing information, which ensures high information capacity of the media and reliable storage of the recorded information. Writing to these media is relatively slow, while reading is fast.

Devices for writing and reading digital information on a magnetic tape cassette are called streamers; they are tape drives used for backup archiving of information. Among their advantages are large volumes of stored information and a low cost of data storage.

3.5. Video controllers and monitors

Devices that display information on the monitor screen are called video adapters, or video controllers. The video controller is an expansion card that provides the formation of an image on the monitor screen using information that is transmitted from the processor.

Video controllers are connected to a PC using special local PCI or AGP buses. The AGP interface is used to speed up data exchange between the processor and the video card. Many video cards are designed to be connected to the motherboard via the AGP connector.

Information is displayed in text or graphics mode. Text mode uses a character-by-character image of the data on the monitor screen; the character images are stored in ROM and are copied from ROM to main memory when the computer is powered on. In graphics mode, information is displayed point by point, with each point of the screen encoded by a number of bits that define its color. In VGA mode, each dot is specified by four bits, so each dot can display one of 16 = 2^4 possible colors. Graphics modes can use different numbers of points both vertically and horizontally.
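The relation between bits per point and the number of colors, and the memory a full-screen image needs, can be checked in a few lines; the 4-bit VGA mode is from the text, while the framebuffer helper is our own illustration:

```python
# n bits per point encode 2**n distinct colors.
def colors(bits_per_point: int) -> int:
    return 2 ** bits_per_point

# Memory needed for a full-screen bitmap at a given resolution and depth.
def framebuffer_bytes(width: int, height: int, bits_per_point: int) -> int:
    return width * height * bits_per_point // 8

print(colors(4))                       # 16 colors in the 4-bit VGA mode
print(framebuffer_bytes(640, 480, 4))  # 153600 bytes (150 KB) for 640x480
```

This is why the amount of video memory matters: higher resolutions and deeper color (8, 16 or 24 bits per point) multiply the framebuffer size accordingly.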

Modern video adapters are called graphics accelerators because they contain special chips that speed up the processing of large amounts of video data; such accelerators have their own specialized microprocessor and memory. The size of this memory matters, since it holds the complete graphical bitmap of the screen. In operation, the video adapter uses its own memory rather than the system's main memory.

However, for high-quality image reproduction, it is not enough to have the video memory of the required amount. It is important that the monitor be able to output in high resolution modes and that the software that sets up the imaging can support the appropriate video mode.

Desktop computers use cathode ray tube monitors, liquid crystal monitors (LCD), and less commonly plasma monitors.

When working in graphical environments, monitors with a screen size of at least 15-17 inches should be used. Among the main parameters of monitors are:

▪ maximum resolution;

▪ length of the diagonal;

▪ distance between pixels;

▪ frame rate;

▪ degree of compliance with environmental safety standards.

An image is considered better quality when the distance between pixels is minimal and the frame rate is high. A frequency of at least 75 Hz ensures a level of image comfort for the eye; the ideal refresh rate is 110 Hz, at which the image is perceived as completely still. The frame rate is not a fixed value: at a higher resolution, the same monitor uses a lower frame rate. The type of video adapter also affects image quality, since inexpensive models may not support the required frequency.

Portable personal computers use LCD and TFT displays, as well as displays with dual-scan screens. TFT displays are the most promising, but quite expensive. The resolution of TFT displays is 640x480 pixels; more expensive portable PCs offer 800x600 and, less often, 1024x768.

3.6. Input devices

The main standard input device in a PC is the keyboard. In its case there are key sensors, decoding circuits and a microcontroller. Each key corresponds to a specific serial number. When a key is pressed, this information is transmitted to the processor in the form of an appropriate code. The code is interpreted by the driver - a special program that accepts characters entered from the keyboard.

There are keys on the keyboard that do not send any code to the processor and are used to switch the state of special keyboard status indicators.

To save space, laptops and pocket PCs use keyboards with a small number of keys.

The layout of the keys on the keyboard corresponds to the standard of Latin typewriters.

Coordinate manipulators are coordinate input devices. These include mice, trackballs and pointers.

The mouse is connected to the computer via a serial port. When the mouse is moved, information about the movement is transmitted to the driver, which changes the location of the mouse cursor on the screen; through this, the application program learns the cursor's current coordinates. The mouse plays a special role when working with graphic information in graphic editors and computer-aided design systems. The most commonly used buttons are the left and right ones; programs usually track single and double clicks of the left mouse button and single clicks of the right one.

The trackball is a ball built into the keyboard, which differs from the mouse in that it does not need to be moved around the work surface.

The pointer is an analogue of the joystick and is placed on the keyboard.

Trackballs and pointers are most often used in laptop computers, while PDAs use a touch screen as a coordinate input device.

Scanners are devices for entering graphic information into a computer. There are manual, flatbed and roll scanners; black and white and color.

A handheld scanner must be moved along the surface of the sheet from which the image is taken. Separate image elements can be entered in parts and combined in the required sequence using special programs.

Flatbed scanners are easy to use, more productive than manual scanners, and more expensive. When working with such scanners, an unfolded book is placed on the scanner tablet, and it reads the entire sheet on its own. These scanners have a high resolution, so they are used to enter photographs and complex illustrations into a PC.

Roll scanners are also easy to use and designed for continuous reading of information from roll media, for example, when analyzing experimental data.

Scanners can be divided into black and white and color. Black and white scanners are mainly used for scanning text information, and color scanners for graphics.

Digitizers are devices for point-by-point coordinate input of graphic images that are used in automatic design systems, computer graphics and animation. This device allows you to enter complex images, such as drawings, maps, etc., with great accuracy.

Structurally, the digitizer is a tablet containing a work surface with a coordinate grid applied to it. It has a control panel and a special light pen connected to the tablet. The digitizer is connected to the computer by a cable through a port.

3.7. Information output devices

Printing devices include printers, which print text and graphics on paper, film and other media. Printers connect to a computer through a parallel or USB port, and several printers can be connected to one computer at the same time. Printers with increased performance that can simultaneously serve several computers, whose jobs are placed in a common queue, are called network printers.
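The idea of a network printer serving several computers through a common queue can be sketched with a simple model (the station names and job format are purely illustrative):

```python
import queue
import threading

# Hypothetical sketch: one shared print queue serving jobs submitted
# by several computers, the way a network printer does.
print_queue = queue.Queue()
printed = []

def printer_worker():
    # The printer takes jobs from the common queue in arrival order.
    while True:
        job = print_queue.get()
        if job is None:          # sentinel: shut the printer down
            break
        printed.append(job)

worker = threading.Thread(target=printer_worker)
worker.start()

# Three "computers" submit jobs to the same queue.
for station in ("PC-1", "PC-2", "PC-3"):
    print_queue.put(f"document from {station}")

print_queue.put(None)            # no more jobs
worker.join()
print(printed)
```

Because the queue is first-in, first-out and there is a single printing worker, jobs come out in the order they were submitted.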

There are daisy-wheel, thermal, special-purpose, dot matrix, inkjet and laser printers.

Daisy-wheel and thermal printers are now rarely used; special-purpose printers print on the surfaces of parts, fabric, glass, etc. Dot matrix, inkjet and laser printers are the most common.

Dot matrix printers consist of a print head that moves along the paper; inside the head are thin needles that are driven by electromagnets. The "ejection" of a certain combination of needles strikes the ink ribbon, which imprints the image of a certain set of dots on the paper. A sequence of such printed dot patterns builds up the outline of a particular character. Dot matrix printers are distinguished by the width of the carriage: "wide" printers are used for printing on A3 paper, and "narrow" printers for A4 paper.

Printing in dot-matrix printers is carried out in the following modes:

▪ draft - low-quality printing;

▪ NLQ - high-quality printing;

▪ graphic.

Most often, dot matrix printers have the following set of font sizes:

▪ pica - 10 characters/inch;

▪ elite - 12 characters/inch;

▪ proportional spacing - proportional, when the width of different letters is not the same, as a result there may be a different number of them per inch.
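For the fixed-pitch fonts above, the number of characters that fit on a line is simple arithmetic: characters per inch times the printable width. A small sketch (the 8-inch printable width is an illustrative assumption for a narrow-carriage printer):

```python
# Characters per line for fixed-pitch dot matrix fonts.
# The 8-inch printable width is an illustrative assumption for a
# "narrow" (A4) carriage; real printers vary.
PRINTABLE_WIDTH_INCHES = 8

def chars_per_line(cpi, width=PRINTABLE_WIDTH_INCHES):
    """Fixed pitch: characters per inch times printable width."""
    return cpi * width

print(chars_per_line(10))  # pica:  80 characters per line
print(chars_per_line(12))  # elite: 96 characters per line
```

Proportional fonts break this arithmetic, since each letter has its own width.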

In addition to black and white, color dot matrix printers are also used.

Inkjet printers, unlike dot matrix printers, do not use the principle of printing needles. Instead, they use the ejection of microscopic ink droplets through the nozzles of the printer head. This greatly improves the speed and quality of printing in graphic modes.

Of the color printers, the most common are three- and four-color printers, and the cheapest are printers with one cartridge used at a time.

Laser printers differ from others in that the image in them is formed by a laser beam on a light-sensitive drum inside the printer. Where the beam illuminates the surface of the drum, an electrostatic charge forms that attracts particles of dry toner. After the drum touches the paper, the toner is fused and leaves a dot impression on the paper, forming an image.

Laser printers have high print quality and high speed, but they are more expensive than other printers.

Plotters (graph plotters) are devices used to draw complex graphics. Plotters can be of two types: flatbed and roll. In a flatbed plotter the sheet is fixed as on a drawing board, and the drawing pen moves in two coordinates over the entire sheet. In a roll-type plotter, the drawing pen moves only across the sheet, while the paper is pulled back and forth by a transport roller, so roll-type plotters are much more compact.

3.8. Information transfer devices. Other peripherals

A device that converts information as it is transmitted between computers over the telephone network is called a modem.

The basis of this process is the conversion of data received from the processor from digital form into a high-frequency analog signal.

There are modems:

▪ internal, which is an expansion card that is installed in one of the free expansion slots on the system board;

▪ external, connected using a special connector to the PC serial port.

One of the most important characteristics of a modem is the maximum data transfer/reception rate it provides. This rate is often quoted in bauds, although strictly a baud measures signal changes (symbols) per second rather than bits per second; the two coincide only for the simplest modulation schemes. Modems of this generation operate at maximum speeds of 28.8 kbit/s and higher.
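As a rough illustration of what such speeds mean in practice, the time needed to transfer a file can be estimated by dividing its size in bits by the line rate (ignoring protocol overhead and compression, which real modems change considerably):

```python
def transfer_time_seconds(size_bytes, rate_bits_per_second):
    # 8 bits per byte; this ignores start/stop bits, protocol
    # overhead and compression, so it is only an estimate.
    return size_bytes * 8 / rate_bits_per_second

# A 1 MB file over a 28,800 bit/s modem link:
seconds = transfer_time_seconds(1_000_000, 28_800)
print(round(seconds, 1))  # 277.8 seconds, i.e. about 4.6 minutes
```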

The fax modem has the functions of receiving and transmitting fax messages. Most often, modern modems are fax modems, and therefore the terms "modem" and "fax modem" are considered synonymous.

Currently, devices are used that can simultaneously transmit data and voice over telephone lines based on DSVD technology. The most common modems in Russia are USRobotics, ZyXEL, GVC.

Power to a computer can fail in emergency situations. Approximately 80% of computer failures are the result of power problems, so an uninterruptible power supply (UPS) is used to protect the machine from power surges and outages.

The uninterruptible power supply unit contains a voltage stabilizer, built-in rechargeable batteries and an inverter (a DC-to-AC converter). In the event of a power failure, this device switches the computer over to its batteries and provides it with power for some time, ensuring stable operation. Such a device is able to maintain normal PC power for 3-20 minutes.

An interactive computer system that provides the synthesis of text, graphics, sound, speech and video images is called multimedia. A multimedia system is a computer whose main devices meet modern requirements. Such a computer must be equipped with a CD drive, a sound card, and speakers or headphones. The CD is one of the main storage media in multimedia systems; encyclopedias, games and educational programs are recorded on it. CDs are sometimes more convenient than books: finding the needed information with the help of special software is easier and faster.

Audio adapters (sound cards) are used to play, record and process sound. These devices convert the computer's digital data into an analog audio signal and vice versa; the sound card houses several different devices that make it possible to build a recording studio around a PC. The main characteristics of audio adapters include bit depth, the number of playback channels (mono or stereo), the synthesis principle used, expandability and compatibility. Sound quality also depends on the type of sound card and the acoustic system. Sufficient sound quality is provided by any active speakers, and better sound is achieved by connecting the audio card to the amplifier input of a home audio system.

Topic 4. Basics of user work in the operating environment of a personal computer

4.1. Operating Systems

The operating system is a set of control programs that serves as an interface between PC components and ensures the most efficient use of computer resources. The operating system is the core system software that is loaded when the computer is turned on.

The main functions of the OS include:

▪ receiving commands or tasks from the PC user;

▪ receiving and executing program requests to start and stop other programs;

▪ loading programs to be executed into RAM;

▪ protecting programs from mutual interference, ensuring data safety, etc.

According to the types of user interface (a set of techniques that ensure the interaction of PC users with its applications), the following operating systems are distinguished:

a) command interface - issuing a system prompt to the monitor screen for entering commands from the keyboard (for example, MS-DOS OS);

b) WIMP interface (Window, Image, Menu, Pointer), or graphical interface: interaction through graphical images stored on the hard disk (for example, the various versions of Windows);

c) SILK interface (Speech Image Language Knowledge) - the use of speech commands for interaction between a PC user and applications. This type of OS is currently under development.

According to the task processing mode, the following operating systems are distinguished:

a) providing a single-program mode, i.e. a method of organizing calculations in which at one time they are able to perform only one task (for example, MS-DOS);

b) working in multiprogram mode, when the organization of computations on a single-processor machine creates the appearance that several programs are executing simultaneously.

The difference between multiprogramming and multitasking modes is that in multiprogramming mode, several applications are executed in parallel, while the user does not need to take care of organizing their work, these functions are taken over by the OS. In multitasking mode, parallel execution and interaction of applications must be provided by application programmers.

In accordance with the support of the multi-user mode, the OS is divided into:

a) single-user (MS-DOS, early versions of Windows and OS / 2);

b) multiuser (network) (Windows NT, Windows 2000, Unix).

The main difference between a multi-user OS and a single-user OS is the availability of means to protect each user's information from illegal access by other users.

4.2. Software classification

Software is a set of programs and related documentation that is designed to solve problems on a PC. It comes in two types: system software and application software.

System software is designed to control a computer, create and support the execution of other user programs, and provide the user with all kinds of services.

Application software is a set of programs that allow you to perform specific operations.

Software is usually divided into operating systems, service systems, software tools and maintenance systems.

The operating system manages the operation of all PC devices and the execution of application programs: it monitors the health of the PC hardware, handles the boot procedure, manages the file system, supports user interaction with the PC, loads and executes application programs, and allocates PC resources such as RAM, CPU time and peripherals among application programs.

Currently, instead of the OS of the DOS family, the OS of the new generation is used, the main distinguishing features of which are:

▪ multitasking - the ability to ensure the execution of several programs simultaneously;

▪ developed graphical interface;

▪ full use of the capabilities of modern microprocessors;

▪ stability in work and security;

▪ hardware independence;

▪ compatibility with all types of applications developed for MS DOS.

Service systems extend the capabilities of the OS and provide the user with a set of various additional services. This type of system includes shells, utilities, and operating environments.

An OS shell is a software product that makes the user's communication with the computer more comfortable.

Utilities are auxiliary programs that provide the user with some additional services.

The disk check program is designed to check the correctness of the information contained in the disk file allocation tables and search for bad disk blocks.

A disk compressor (compactor) is used to create and maintain compressed disks. A compressed disk is a file on a conventional physical floppy or hard disk whose data is compressed when written and decompressed when read.

The disk backup program is designed to work in three modes: backup, recovery and comparison of source data with their backups.

Archivers include programs that can significantly reduce the "volume" occupied by a particular document. Archivers are used to save memory space.
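What an archiver does can be illustrated with Python's standard zlib module: redundant data compresses to a fraction of its size, and decompression restores it exactly:

```python
import zlib

# Sketch of what an archiver does: compress redundant data to save
# disk space. Highly repetitive text compresses very well; random
# data hardly at all.
original = b"the quick brown fox " * 500   # 10,000 bytes of repetitive text
packed = zlib.compress(original)

print(len(original), len(packed))          # packed is far smaller
assert zlib.decompress(packed) == original # lossless: fully recoverable
```

The same compress-on-write, decompress-on-read principle underlies the disk compressors described above.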

The System Monitor program is used to analyze the peak usage of the processor and other resources.

Antivirus programs are integrated tools for detecting and eliminating computer viruses.

Software tools are software products used to develop software.

Maintenance programs are used to control the operation of various computer systems; they allow you to monitor whether the equipment is functioning correctly and to perform diagnostics.

4.3. Purpose of operating systems

A computer system consists of processors, memory, timers, various types of disks, magnetic tape drives, printers, network communication equipment, and so on; how this system appears to the user largely depends on the OS. The operating system manages all the resources of the computer, ensuring maximum efficiency of its operation. The main function of the OS is to distribute processors, memory, other devices and data among the computing processes that compete for these resources. Resource management includes the following tasks:

1) resource planning, i.e. determining to whom, when and in what quantity it is necessary to allocate this resource;

2) control over the state of the resource, i.e. maintaining operational information about whether the resource is occupied or not, how much of the resource has already been distributed, and how much is free.
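These two tasks, planning and state control, can be sketched with a toy allocator (not a real scheduler; the names and amounts are illustrative):

```python
# Toy sketch of OS resource management: tracking how much of a
# resource each process holds, and refusing requests that exceed
# what is free. Purely illustrative, not a real OS component.
class ResourceManager:
    def __init__(self, name, total):
        self.name = name
        self.total = total
        self.allocated = {}          # process id -> amount held

    def free(self):
        # Control: operational information on how much is still free.
        return self.total - sum(self.allocated.values())

    def request(self, pid, amount):
        # Planning: decide whether this process may get the resource now.
        if amount <= self.free():
            self.allocated[pid] = self.allocated.get(pid, 0) + amount
            return True
        return False                 # not enough free: request denied

    def release(self, pid):
        # Return everything the process held to the free pool.
        return self.allocated.pop(pid, 0)

mem = ResourceManager("RAM (MB)", total=256)
assert mem.request("editor", 128)        # granted
assert not mem.request("browser", 200)   # only 128 MB free: denied
mem.release("editor")
assert mem.request("browser", 200)       # now it fits
print(mem.free())                        # 56
```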

Operating systems are classified according to the features of the implementation of computer resource management algorithms, areas of use, and many other features.

4.4. Evolution and characteristics of operating systems

Vacuum-tube computers were created in the mid-1940s. At that time no operating systems were used; all tasks were organized manually by the programmer from the control panel.

In the mid-1950s semiconductor elements were invented and came into use. In connection with this, the first algorithmic languages and the first system programs (compilers) appeared, followed by the first batch processing systems. These systems became the prototype of modern operating systems and were the first system programs for managing the computing process.

The period from 1965 to 1980 saw a transition to integrated circuits.

The advent of LSI led to a sharp reduction in the cost of microcircuits. The computer became available to an individual, which led to the onset of the era of the PC.

The mid-1980s were characterized by the development of PC networks running network or distributed operating systems.

The operating system is the main part of the network software; it provides the environment in which applications run and determines how efficiently they will work. The main requirement for modern operating systems is the ability to perform fundamental functions, in particular efficient resource management, and to provide a convenient interface for the user and application programs. The operating system is expected to implement multiprogram processing and virtual memory, support a multi-window interface, and so on. In addition to functional requirements, market requirements are also imposed on the OS.

1. Extensibility. The system should be written in such a way that it can be easily added and changed without violating its integrity.

2. Portability. Without much difficulty, the OS should be ported from one type of hardware to another type of hardware.

3. Reliability and fault tolerance. The operating system must be protected from internal and external errors, faults and failures; its actions should be predictable, and applications should not be able to destroy it.

4. Compatibility. The system must have the means to run application programs written for other operating systems. The user interface of the system must be compatible with existing systems and standards.

5. Safety. The system must have means of protecting the resources of some users from others.

6. Performance. The system should be as fast as the hardware allows.

The network OS is evaluated according to the following criteria:

▪ the ability to share files and printers with high performance;

▪ effective execution of application programs oriented to the client-server architecture, including application programs from manufacturers;

▪ availability of conditions for working on various platforms and with various network equipment;

▪ ensuring integration with the Internet, i.e. support for relevant protocols and Web server software;

▪ remote network access;

▪ organization of internal email, teleconferences;

▪ access to resources across geographically dispersed, multi-server networks using directory and naming services.

4.5. Operating system of new technologies

An example of a new operating system is Microsoft Windows NT, which is a fast 32-bit networking system with a graphical user interface and built-in networking tools. This OS is network oriented.

To communicate between remote sites using the remote access service, modems are required at both ends of the connection; printers, tape drives and other devices can also be shared.

The Windows NT operating system has the features listed below.

1. Portability, i.e. the ability to work on CISC and RISC processors.

2. Multitasking, i.e. the ability to use one processor to run multiple applications or threads.

3. Multiprocessing support, i.e. the ability to use several processors that simultaneously execute multiple threads, one for each processor in the computer.

4. Scalability, i.e. the ability to automatically use the positive qualities of the added processors. For example, to speed up the application, the OS can automatically connect additional identical processors. Windows NT scalability is provided by:

▪ multiprocessing of local computers, i.e. the presence of several processors, interaction between which occurs through shared memory;

▪ symmetric multiprocessing, which involves simultaneous execution of applications on several processors;

▪ distributed information processing between several networked computers, implemented based on the concept of remote procedure calls, which supports the client-server architecture.
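The idea of running several threads in parallel, one per available processor, can be sketched with a thread pool (a simplified illustration, not Windows NT code):

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch of multithreading: a pool of worker threads
# shares a set of tasks, the way an SMP operating system can run
# one thread per processor.
def square(n):
    return n * n

# Four workers stand in for four "processors".
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Note that `pool.map` preserves input order even though the tasks may complete on different threads at different times.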

5. Client-server architecture that connects a single-user workstation and multi-user general-purpose servers (to distribute the data processing load between them). This interaction is object-oriented; the object sending the message is the client, and the object receiving the message is the server.

6. Object architecture. Objects are directory, process and thread objects, memory section and segment objects, port objects. An object type includes a data type, a set of attributes, and a list of operations that can be performed on it. Objects can be managed using OS processes, i.e., through a certain sequence of actions that define the corresponding program and make up the task.

7. Extensibility, which is due to an open modular architecture that allows you to add new modules to all levels of the OS. The modular architecture facilitates connectivity with other networking products, and computers running Windows NT are able to interact with servers and clients from other operating systems.

8. Reliability and fault tolerance, determined by the fact that the architecture protects the OS and applications from destruction.

9. Compatibility, i.e. the ability of Windows NT version 4 to support MS DOS, Windows 3.x, OS/2 applications and have a wide range of devices and networks.

10. Domain architecture of networks, which predetermines the grouping of computers into domains.

11. A multi-level security system that was created to ensure the security of the OS, applications, information from destruction, illegal access, unprofessional user actions. It works at the level of the user, local and network computers, domains, objects, resources, network transmission of information, applications, etc.

4.6. WINDOWS NT architecture

The Windows NT operating system has a modular architecture.

The first module - user mode - allows the user to interact with the system. This level includes the environment subsystems and the security subsystem. A set of subsystems that support different types of user programs is called the environment subsystems. These include the Win32 subsystem, which supports 16- and 32-bit Windows and DOS applications and controls the Windows NT user interface, among others. The security subsystem provides legal user logon to the system.

The second module - kernel mode - ensures the safe execution of user applications. At this level, three enlarged modules are distinguished: executing services, the kernel, and the level of hardware abstractions.

The interaction between the subsystem core and environment subsystems is carried out by executing services consisting of a system service and a kernel mode service. A system service is an interface between application environment subsystems and kernel-mode services. The kernel-mode service consists of the following software modules:

▪ input/output manager, which allows you to manage information input/output processes;

▪ object manager, which manages system operations that are performed on objects (use, rename, delete, protect an object);

▪ security control manager, which guarantees the security of the system;

▪ means of calling local procedures that support the operation of user applications and environment subsystems and ensure the exchange of information;

▪ virtual memory manager, which is a service that manages physical and virtual memory;

▪ process manager, which regulates the actions of processes (creation, deletion, logging); distributing address space and other resources between processes.

All system processes are controlled by the Windows NT kernel, which is also responsible for the optimal operation of the system.

The part of the system that ensures the independence of the upper levels of the OS from the specifics and differences of specific hardware is called the hardware abstraction layer. This module contains all hardware-specific information.

The graphical user interface is designed to create a comfortable environment for the user when working with Windows NT. This interface is clear, simple, convenient when launching programs, opening and saving files, working with files, disks and network servers. The GUI in Windows NT is based on an object-oriented approach. The work of the user in this approach is focused mainly on documents, and not on programs. Loading any document is carried out by opening the file that contains this document, while automatically loading the program with which the file being opened was created.

The Windows NT user interface contains the following elements: the Desktop; the Taskbar; the Start menu; context menus; the Windows NT application menu system; shortcuts such as "My Computer", "Network Neighborhood", "Recycle Bin", "Internet Explorer", "Inbox" and "My Briefcase"; windows; fonts; and the Windows NT Help system. The desktop includes shortcuts depicting programs, documents, and devices. Shortcuts allow you to quickly access programs, folders, documents and devices on your computer or on the network.

4.7. WINDOWS NT installation

The installation process is designed to resolve the issues listed below, in sequence.

1. Selecting the file system to be used. If you are installing Windows NT Server, you must decide whether to use the domain model or the workgroup model. During installation, you need to specify the role played by the Windows NT Server machine: primary or backup domain controller, file server, printer, or application server.

2. Formation of a set of required protocols installed by default. If you select the Express Setup installation type, you can install other protocols later.

3. Preparation of a given password.

4. Selecting the type of network card used, the type of disk adapter, the configuration of the sound card.

5. Determining the type and model of the printer and the port of its connection while installing Windows NT and printer drivers.

6. Testing equipment for serviceability using diagnostic tests.

7. Checking the compatibility of all computer devices with Windows NT.

During the installation of a Windows NT system, the setup program prompts you for the desired installation options, copies the required files to your hard drive, and creates and displays a start menu.

Windows NT installation can be:

▪ initial, if no system was previously installed on the computer or the existing OS needs to be completely replaced;

▪ an upgrade, when Windows NT is installed over a previous version. This replaces all existing Windows NT files but preserves the registry settings associated with application loading and security identifiers.

Windows NT installation begins by launching the winnt.exe utility, which is a 16-bit application that runs in DOS, Windows NT, etc. In case of an update, the 32-bit version of this file, winnt32.exe, is launched.

There are several ways to install Windows NT:

▪ from an HCL-compatible CD-ROM using boot disks;

▪ from CD without boot disks, if an OS is already present on the computer;

▪ from a storage device accessible on the local computer network.

If the CD-ROM is an HCL-compliant device, Windows NT is installed using boot diskettes.

When the computer has a previously installed OS and the CD-ROM is not an HCL-compliant device, the contents of the corresponding folder are copied to the hard disk. Using a command-line switch, the installer copies files to the hard disk from any medium other than boot disks. These files are launched after the computer is restarted.

With the support of a network card and Windows NT network protocols, it is possible to run the installation program without using additional keys. The files and distribution directories can be located on the server's CD-ROM or hard drive. If the network card or protocol is not supported by Windows NT, then the entire distribution directory should be copied to the computer's hard drive.

If any of the OS was not previously installed on the computer, then a boot disk for the user can be created using the Windows NT Server Client Administrator Utility. This disk initiates the DOS boot, and it becomes possible to copy the distribution files to the disk.

4.8. Registry and configuration of the WINDOWS NT operating system

The main information about the composition of the Windows NT system is located in the registry (a special database), which contains information about: installed programs, libraries and drivers; about links between documents and programs in which they were formed; parameters that control the operation of computers connected to local or global networks.

When using the registry, it is possible to modify the OS configuration. The same result can be obtained using the user interface, for example through the control panel. The registry reflects all changes, but before making changes to it, you should make a backup copy of the system and print its main elements. The registry can be edited by a user registered in the Administrator group.

Information about the local system is located in the following subsections:

1) SYSTEM (system) - information related to starting the system, loading device drivers;

2) Hardware (hardware) - information about installed hardware, displays their current state;

3) Software (software) - information about software settings;

4) Security Account Manager SAM (security account manager) - information about the local user, group accounts and domain value;

5) SECURITY - information about the protection used by the security system of this computer.
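The hierarchical organization of these subsections can be modeled as nested keys; the sketch below is a toy illustration (the key names follow the text, the values are invented):

```python
# Toy model of a hierarchical registry: subsections (keys) contain
# either further keys or values. Key names follow the lecture text;
# the nested structure and values are purely illustrative.
registry = {
    "SYSTEM":   {"CurrentControlSet": {"BootDriver": "disk.sys"}},
    "HARDWARE": {"Description": {"CPU": "x86"}},
    "SOFTWARE": {},
    "SAM":      {},
    "SECURITY": {},
}

def query_value(root, path):
    """Walk a backslash-separated path down the key hierarchy."""
    node = root
    for part in path.split("\\"):
        node = node[part]
    return node

print(query_value(registry, "SYSTEM\\CurrentControlSet\\BootDriver"))
```

Editing a value in such a store is what registry editors do; as the text stresses, the real registry should be backed up before any change.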

With this architecture of the registry, Windows NT can maintain a universal store for all information and provide distributed but secure access to it over the network. The total size of the Windows NT 4 registry files is limited to 2 GB or the unallocated disk space on the system volume. The ability to change the characteristics and values of subsections and registry keys allows you to modify the Windows NT OS, in particular:

▪ increase the speed of the desktop by setting the number of icons stored in memory and the cache file;

▪ vary the number, size and color of icons that are displayed on the screen, and other OS shell settings;

▪ replace Windows Explorer with a program manager or another shell;

▪ change the appearance of standard icons on the desktop and in the start menu.

To select a different type of system service, device driver, or file driver, you must set the desired options in the appropriate registry key.

The registry allows you to increase the efficiency of working with memory, namely, to improve the use of physical and virtual memory in Windows NT. This can be done by increasing the size of the file cache.

Using the registry helps you manage many networking components, but not all networking services may run on your system. Using utilities, you can identify active components and place them at the top of the list of network access components, which leads to a significant increase in system performance. The same program determines the level of RAM usage, and if there is not enough memory, it can change the number of users accessing the server.

With a large number of requests, it is possible to change the number of threads. Increasing this value improves system performance.

Utilities and appropriate protocols are used to install and configure remote access. The same utility is used to configure port usage.

4.9. Features of the WINDOWS 2000 operating system

The Windows 2000 software product can be used on desktop PCs and in server clusters with symmetric multiprocessing. Such processing is supported by a storage subsystem with a capacity of millions of terabytes and RAM of hundreds of gigabytes. The Windows 2000 family includes four network operating systems focused on solving different types of user tasks:

1) Windows 2000 Professional - a network OS designed for office and mobile PCs. This system is an improved version of Windows NT Workstation 4.0 and has increased reliability and security;

2) Windows 2000 Server is a universal network operating system for servers with up to 4 processors and 4 GB of RAM, aimed at small and medium-sized organizations. Windows 2000 Server takes the best features of Windows NT Server 4.0 and sets a new standard for reliability, OS integration, directory services, applications, Internet networking, print services, and file access;

3) Windows 2000 Advanced Server is a specialized OS supported by 8-processor servers and 8 GB RAM. Used to work as an application server, Internet gateway, etc.;

4) Windows 2000 Datacenter Server - a system that supports 32-processor architectures and 64 GB RAM. Used to solve resource-intensive tasks, it is able to solve all the tasks of Windows 2000 Advanced Server and problems that require a high level of scalability.

The scalability and performance of Windows 2000 are high compared with its predecessors. This is achieved by expanding the physical address space, which allows the processor to address up to 64 GB of RAM; by supporting up to 32-processor systems; and by special software techniques for reserving and locking memory that reduce competition between processors for resources.

Windows 2000 is enhanced with tools such as Advanced System Restore, Driver Incompatibility Repair Wizard, and Component Manager to make the administrator's job easier and more secure.

Windows 2000 implements the principle of reducing unplanned system downtime to zero and, when failures do occur, giving the administrator maximum help in identifying their causes. For this purpose, reliability mechanisms are built into the system and administrators are given new tools for restoring the system after failures.

If the failure is caused by the installation of incorrect drivers, the administrator can boot in safe mode, that is, select one of four possible boot modes: standard, with networking, with command line, or Active Directory service restore.

In safe mode, the administrator can verify the correctness of any drivers and can change the default values of key driver and service parameters in the registry branches that define them.

Another system recovery tool is the recovery console, used when booting from a CD or boot floppies to restore the system or replace corrupted system kernel files.

4.10. Network operating systems

A network operating system (Network Operating System, NOS) is the set of operating systems of individual computers that communicate with one another in order to exchange information and share resources according to uniform rules (protocols). In a narrower sense, a network OS is the operating system of an individual workstation that enables it to work in a network.

The network OS contains the following tools:

1) management of the local PC's resources (for example, distributing RAM among running processes);

2) provision of own resources and services for general use (server part of the OS);

3) requesting access to remote resources and services, as well as their use (the client part of the OS);

4) messaging in the network (communication means).
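The server and client parts listed above (points 2 and 3) can be sketched as a toy exchange over sockets; all names and the "shared files" resource here are illustrative, not part of any real NOS:

```python
import socket
import threading

# Toy sketch of the two halves of a network OS: a "server part" that
# offers a local resource (a dictionary standing in for shared files)
# and a "client part" that requests it over the network.
SHARED_FILES = {"report.txt": b"quarterly figures"}

def server_part(sock):
    conn, _ = sock.accept()
    with conn:
        name = conn.recv(1024).decode()            # client's request
        conn.sendall(SHARED_FILES.get(name, b""))  # serve the resource

def client_part(port, name):
    with socket.create_connection(("127.0.0.1", port)) as conn:
        conn.sendall(name.encode())                # request remote resource
        return conn.recv(1024)                     # use the reply

listener = socket.socket()
listener.bind(("127.0.0.1", 0))                    # pick any free port
listener.listen(1)
port = listener.getsockname()[1]

threading.Thread(target=server_part, args=(listener,)).start()
print(client_part(port, "report.txt"))             # -> b'quarterly figures'
```

The communication means of point 4 are represented here by the sockets themselves; a real NOS layers its messaging on protocols rather than raw byte exchanges.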

Any network OS must manage resources efficiently, provide a convenient multi-window user interface, and so on. Since the 1990s, a set of standard requirements has been imposed on network operating systems:

▪ ability to expand;

▪ portability;

▪ sufficient reliability;

▪ compatibility;

▪ security;

▪ productivity.

Depending on the functions assigned to them, network operating systems are divided into systems designed specifically for peer-to-peer networks and systems for networks with a dedicated server. Server computers should run operating systems optimized for particular server functions. Therefore, in networks with dedicated servers, network operating systems are often supplied as families of several OS variants that differ in the capabilities of their server parts.

According to the scale of networks served, network operating systems are divided into the following types:

1) department networks, which unite a small group of employees of a particular enterprise or organization. The main task of such a system is the sharing of local resources;

2) campus-level networks, which combine several networks of enterprise departments within a separate building or one territory into a single local area network. The main function of such systems is to provide access to employees of some departments to information and resources of networks of other departments;

3) corporate networks (or enterprise networks), which include all local networks of an individual enterprise located in different territories. Corporate networks are global computer networks. Operating systems at this level must support a wider set of services.

4.11. UNIX family of operating systems

The UNIX system project was created by K. Thompson and D. Ritchie at AT&T's Bell Labs more than 20 years ago. The OS they developed was implemented in assembly language. Initially, Bell Labs employee B. Kernighan called this system "UNICS" (Uniplexed Information and Computing Service). However, it soon became known under the shorter name "UNIX".

In 1973, D. Ritchie developed the high-level programming language C, and UNIX was soon rewritten in it. After D. Ritchie and K. Thompson published a description of UNIX in the journal CACM in 1974, the system began to be used everywhere.

The main problem of the UNIX family is the incompatibility of its different versions. Attempts to standardize UNIX ended in failure, since two incompatible versions of the system were the most widely used: the AT&T line (UNIX System V) and the Berkeley line (UNIX BSD). On the basis of these versions, many companies developed their own variants of UNIX: SunOS and Solaris from Sun Microsystems, AIX from IBM, UnixWare from Novell, etc.

One of the latest versions, UNIX System V Release 4, brought together the best features of the UNIX System V and UNIX BSD lines, but this version of the system is incomplete, since it lacks the system utilities necessary for successful use of the OS.

Common features for any UNIX OS are:

1) multi-user mode with a method of protecting data from unauthorized access;

2) implementation of multiprogram processing in the time-sharing mode, which is based on the use of preemptive multitasking algorithms; increasing the level of multiprogramming;

3) unification of input-output operations based on the extended use of the concept of "file";

4) a hierarchical file system that forms a single directory tree regardless of the number of physical devices used to place files;

5) portability of the system, which is carried out by writing its main part in the C language;

6) various means of interaction between processes, for example, through a network;

7) disk caching to reduce the average file access time.
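Points 3 and 4 above can be illustrated on any UNIX-like system (the sketch assumes /dev is present): the same file operations apply to devices, which live in the same directory tree as ordinary files:

```python
import os

# "Everything is a file" (point 3): the same open/read/write calls work
# on regular files and on devices. /dev/urandom and /dev/null are
# ordinary entries in the single directory tree (point 4), even though
# no disk blocks sit behind them.
fd = os.open("/dev/urandom", os.O_RDONLY)   # a device, opened like a file
random_bytes = os.read(fd, 8)               # same read() as for any file
os.close(fd)

with open("/dev/null", "wb") as sink:       # a device, written like a file
    sink.write(b"discarded")

print(len(random_bytes))                    # -> 8
```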

4.12. Operating system Linux

The Linux OS is based on a project by L. Torvalds, a student at the University of Helsinki who used the Minix system. Torvalds developed an efficient PC version of UNIX for Minix users and called it Linux. In 1991 he released Linux version 0.11, which spread rapidly over the Internet. In subsequent years the OS was refined by other programmers, who added to it the features and facilities inherent in standard UNIX systems. After some time, Linux became one of the most popular UNIX projects of the late 20th century.

The main advantage of the Linux OS is that it can be used on computers of any configuration - from desktop to powerful multiprocessor servers. This system is able to perform many of the functions traditional for DOS and Windows, such as file management, program management, user interaction, etc. The Linux system is especially powerful and flexible, giving the computer the speed and efficiency of UNIX, while using all advantages of modern PCs. At the same time, Linux (like all versions of UNIX) is a multi-user and multi-tasking operating system.

The Linux operating system is available to everyone, since it is a non-commercial project and, unlike UNIX, is distributed to users free of charge under the GNU General Public License of the Free Software Foundation. For this reason, this OS is often not considered professional. In fact, it can be described as a desktop version of the professional UNIX operating system. The advantage of UNIX is that its development and evolution proceeded simultaneously with the revolution in computing and communications, which has been going on for several decades, and completely new technologies were created on its basis. UNIX itself is built so that it can be modified to produce different versions; therefore, there are many official variants of UNIX, as well as versions suited to specific tasks. The Linux operating system, developed in this context, can be considered yet another version of UNIX, created specifically for the PC.

The Linux operating system has several editions, since each manufacturer completes the system and its software in its own way, releasing after that a package with its own edition of this system. At the same time, various editions may include modified versions of programs and new software.

4.13. Novell family of network operating systems

One of the first firms to produce both hardware and software for local area networks was Novell. At present, it concentrates on LAN software. Novell is best known for its NetWare family of network operating systems, which are oriented toward networks with dedicated servers.

Novell focused on developing a highly efficient NetWare server part to provide the highest possible speed of remote file access and data security for this class of computer. For the server side of its systems, Novell developed a specialized OS that is optimized for file operations and uses all the features of Intel 80386 and higher processors. There are several stages in the evolution of Novell's network operating systems:

1) 1983 - the first version of NetWare was developed;

2) 1985 - the system Advanced NetWare v 1.0 appeared, expanding the functionality of the server;

3) 1986 - version 2.0 of the Advanced NetWare system, which differs from the previous ones in higher performance and the ability to combine different networks at the link level. This OS provided the ability to connect up to four networks with different topologies to one server;

4) 1988 - OS NetWare v2.15, which added support for Macintosh computers to NetWare;

5) 1989 - the first version of 32-bit OS for servers with 80386 microprocessor - NetWare 386 v3.0;

6) 1993 - OS NetWare v4.0, which became in many respects a revolutionary new product.

Versions of NetWare v4.xx have the following features:

▪ have a specialized network resource management system (NetWare Directory Services - NDS);

▪ memory is managed as a single area;

▪ the new Data Storage Management system contains three components: block suballocation, i.e., dividing data blocks into sub-blocks (Block Suballocation); file compression (File Compression); and data migration (Data Migration);

▪ built-in support for the Packet Burst protocol;

▪ all system messages and the interface are implemented through a separate special module;

▪ NetWare v4.xx OS management utilities are supported by DOS, Windows and OS/2 interface.
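The effect of block suballocation listed above can be shown with a rough calculation; the 64 KB block and 512-byte sub-block sizes are illustrative assumptions, not figures from NetWare documentation:

```python
# Block suballocation (one of the Data Storage Management components):
# the tail of a file no longer wastes a whole allocation block, because
# blocks are divided into small sub-blocks.
BLOCK = 64 * 1024      # assumed allocation block size in bytes
SUB = 512              # assumed sub-block size used for file tails

def wasted(file_size, granularity):
    """Slack space left in the last allocation unit of a file."""
    remainder = file_size % granularity
    return 0 if remainder == 0 else granularity - remainder

size = 100                  # a tiny 100-byte file
print(wasted(size, BLOCK))  # without suballocation -> 65436 bytes lost
print(wasted(size, SUB))    # with suballocation    -> 412 bytes lost
```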

Flaws in NetWare v4.0x prevented it from winning the market. NetWare v4.1 became more widespread. The NetWare v5.x and NetWare v6 lines evolved from NetWare v4.x.

Topic 5. Basics of working in local and global computer networks

5.1. The evolution of computer networks

The concept of computer networks is a logical result of the evolution of computer technology. The first computers of the 1950s were large, bulky and expensive and were intended for a small number of select users. Such computers were not used for interactive work; instead, they operated in batch processing mode.

Batch processing systems were usually based on a mainframe, a powerful and reliable general-purpose computer. Users prepared punched cards containing data and program commands and handed them over to the computer center. Operators fed the cards into the computer and gave the results back to the users the next day, so a single incorrectly punched card could mean a delay of at least a day.

For users, an interactive mode of operation would have been much more convenient, allowing them to manage the data processing promptly from a terminal. However, at this stage batch mode was the most efficient way of using computing power, since it completed more user tasks per unit of time than any other mode. At the forefront was the efficiency of the computer's most expensive device, the processor, even at the expense of the efficiency of the specialists using it.

In the early 1960s, the cost of producing processors decreased and new ways of organizing the computing process appeared that took users' interests into account. The development of interactive multi-terminal time-sharing systems began. In these systems, several users worked on the computer at once, each with a terminal at their disposal through which they communicated with the machine. The reaction time of the computing system was short enough that a user did not notice the parallel work of other users. By sharing the computer in this way, users could enjoy the benefits of computerization for a relatively small fee.

Terminals, leaving the confines of the computer center, were dispersed throughout the enterprise. Although computing power remained fully centralized, many operations, such as data input and output, became distributed. These multi-terminal centralized systems outwardly became very similar to local area networks. Indeed, each user perceived work at a mainframe terminal in much the same way as work at a networked PC is perceived now: the user had access to shared files and peripherals, could run a needed program at any time and obtain the result almost immediately, and so felt like the sole owner of the computer.

Thus, multi-terminal systems operating in time-sharing mode were the first step toward local area networks. However, there was still a long way to go before local networks appeared: although multi-terminal systems had the external features of distributed systems, they retained the centralized nature of information processing, and enterprises' need for local networks had not yet matured, since in one building there was simply nothing to connect into a network. The high cost of computing technology prevented businesses from purchasing several computers. During this period, the so-called Grosch's law held, empirically reflecting the technology level of the time: the performance of a computer was proportional to the square of its cost. Hence, for the same money it was more profitable to buy one powerful machine than two less powerful ones, whose total power turned out to be much lower than that of the expensive machine.

However, the need to connect computers located at great distances from each other had fully matured by this time. The development of computer networks began with the solution of a simpler problem: access to a computer from terminals hundreds or even thousands of kilometers away. Terminals were connected to computers through telephone networks using modems. Such networks allowed numerous users remote access to the shared resources of several powerful supercomputer-class machines. Later, systems appeared in which, alongside remote terminal-to-computer connections, remote computer-to-computer connections were used as well. Computers could exchange data automatically, which is the basic mechanism of any computer network. On this mechanism the first networks built file exchange, database synchronization, e-mail and other services that have since become traditional network services.

So, chronologically, global computer networks were the first to be developed and applied. It was during the construction of global networks that almost all the basic ideas and concepts of existing computer networks were proposed and worked out, for example, the multilevel construction of communication protocols, packet switching technology, and packet routing in composite networks.
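Packet switching, named above among the concepts worked out in global networks, can be sketched as follows; the packet format (just a sequence number plus payload) is invented for illustration:

```python
# Packet switching sketch: a message is cut into numbered packets that
# may travel through the composite network independently, possibly
# arriving out of order, and are reassembled at the destination.

def packetize(message, size):
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    # each packet carries (sequence number, payload)
    return [(seq, chunk) for seq, chunk in enumerate(chunks)]

def reassemble(packets):
    # packets may arrive out of order; sort by sequence number
    return b"".join(chunk for _, chunk in sorted(packets))

pkts = packetize(b"hello, composite network", size=8)
pkts.reverse()                       # simulate out-of-order arrival
print(reassemble(pkts))              # -> b'hello, composite network'
```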

In the 1970s there was a technological breakthrough in the production of computer components, which resulted in the emergence of large-scale integration (LSI) circuits. Their low cost and rich functionality made it possible to create minicomputers, which became real competitors to mainframes. Grosch's law ceased to hold, because ten minicomputers performed some tasks much faster than one mainframe, while such a minicomputer system cost less.

Small divisions of enterprises could now purchase computers for themselves. Minicomputers were able to perform the tasks of managing technological equipment, a warehouse, and solving other problems corresponding to the level of an enterprise division, i.e., the concept of distributing computer resources throughout an enterprise appeared, but at the same time, all computers of one organization continued to work independently.

Over time, users' needs grew, and they wanted to be able to exchange data with other nearby computers. For this reason, businesses and organizations began connecting their minicomputers and developed the software necessary for their interaction. As a result, the first local area networks emerged. They still differed significantly from modern networks, particularly in their interface devices. Initially, a wide variety of non-standard devices were used to connect computers to each other, each with its own way of presenting data on the communication line, its own cable types, and so on. Such devices could connect only the types of computers for which they were designed. This situation left great scope for student creativity: many course and diploma projects were devoted to interface devices.

In the 1980s, the state of affairs in local networks began to change dramatically. Standard technologies for connecting computers into a network appeared: Ethernet, Arcnet, Token Ring. A strong impetus for their development came from personal computers. These mass-produced machines were ideal elements for building networks: on the one hand, they were powerful enough to run network software, and on the other, they needed to combine their computing power to solve complex problems. Personal computers began to predominate in local networks, not only as client machines but also as data storage and processing centers, i.e., network servers, displacing minicomputers and mainframes from these customary roles.

Standard network technologies turned the process of building a local network from an art into a routine job. To create a network, it was enough to purchase network adapters of the appropriate standard, such as Ethernet, and a standard cable, connect the adapters and cable with standard connectors, and install one of the available network operating systems, such as NetWare, on the computer. After that the network simply worked, and connecting a new computer caused no problems: it joined naturally as long as a network adapter of the same technology was installed in it.

Local networks in comparison with the global ones have introduced a lot of new technologies for organizing the work of users. Access to shared resources became much more convenient, since the user could simply examine the lists of available resources, rather than remembering their identifiers or names. When connecting to a remote resource, it was possible to work with it using the commands already known to the user for working with local resources. The consequence and at the same time the driving force behind such progress was the emergence of a large number of non-professional users who did not need to learn special (and rather complex) commands for networking at all. Local network developers got the opportunity to use all these conveniences with the appearance of high-quality cable communication lines, with the help of which even first-generation network adapters could provide data transfer rates up to 10 Mbps.

However, the developers of global networks could not even dream of such speeds, since they had to use the communication channels that already existed: laying new cable systems thousands of kilometers long for computer networks would have required enormous capital investment. Only telephone channels were available at the time, and these were poorly suited to high-speed transmission of discrete data: a speed of 1200 bit/s was a good achievement for them. For this reason, economical use of channel bandwidth became the main criterion of effectiveness for data transmission methods in global networks. Under such conditions, the procedures for transparent access to remote resources that are standard in local networks long remained an unaffordable luxury for global ones.
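The gap described above becomes vivid if we compute the time to transfer a 1 MB file at the two speeds mentioned (1200 bit/s over a telephone link versus 10 Mbit/s for a first-generation LAN adapter):

```python
# Transfer time for a 1 MB file at the two data rates cited in the text.
FILE_BITS = 1_000_000 * 8              # 1 MB expressed in bits

wan_seconds = FILE_BITS / 1200         # early modem link over telephone lines
lan_seconds = FILE_BITS / 10_000_000   # first-generation Ethernet adapter

print(round(wan_seconds / 3600, 1))    # -> 1.9  (hours over the modem link)
print(round(lan_seconds, 1))           # -> 0.8  (seconds over the LAN)
```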

At the moment, computer networks are constantly evolving, and quite quickly. The separation between local and global networks is constantly decreasing, largely due to the emergence of high-speed territorial communication channels that are not inferior in quality to cable systems of local networks. In global networks, resource access services have emerged that are as convenient and transparent as local network services. Such examples are shown in great numbers by the most popular global network - the Internet.

Local networks have also been transformed. The passive cable connecting computers has been joined by various types of communication equipment: switches, routers, gateways. Such equipment has made it possible to build large corporate networks that comprise thousands of computers and have a complex structure. Interest in large computers has revived, because after the euphoria over the ease of working with PCs subsided, it became clear that systems consisting of hundreds of servers are more difficult to maintain than a few large computers. Therefore, at a new stage of evolution, mainframes are returning to corporate computing systems, but now as full-fledged network nodes supporting Ethernet or Token Ring, as well as the TCP/IP protocol stack, which thanks to the Internet has become the de facto networking standard.

Another important trend has emerged, affecting local and global networks equally. They have begun to carry information previously unusual for computer networks, such as voice, video and drawings. This has required changes to protocols, network operating systems and communication equipment. The difficulty of transmitting such multimedia information over a network lies in its sensitivity to delays in the delivery of data packets; delays usually distort this information in the end nodes of the network. Since conventional services such as file transfer or e-mail generate latency-insensitive traffic, and all network elements were designed with such traffic in mind, the advent of real-time traffic caused great problems.

At the moment, these problems are solved in various ways, for example, with the help of ATM technology specially designed for the transmission of different types of traffic. However, despite the great efforts made in this direction, it is still far from an acceptable solution to the problem, and much more needs to be done in this area in order to achieve the fusion of technologies not only for local and global networks, but also for technologies of any information networks - computer, telephone, television, etc. Despite the fact that today this idea seems unrealistic to many, experts believe that the prerequisites for such an association already exist. These opinions differ only in the estimation of the approximate terms of such an association - the terms are called from 10 to 25 years. At the same time, it is believed that the basis for the synthesis will be the packet switching technology used today in computer networks, and not the circuit switching technology used in telephony.

5.2. Main software and hardware components of the network

As a result of even a superficial examination of network operation, it is clear that a computer network is a complex set of interconnected and coordinated software and hardware components. The study of the network as a whole involves the study of the principles of operation of its individual elements, among which are:

1) computers;

2) communication equipment;

3) operating systems;

4) network applications.

All software and hardware of the network can be described by a multilayer model. The first is the hardware layer of standardized computer platforms. At present, computers of various classes are widely and successfully used in networks, from PCs to mainframes and supercomputers. The mix of computers in a network should match the mix of tasks that the network solves.

The second layer is the communication equipment. Although computers are central to the processing of information in networks, communication devices such as cabling, repeaters, bridges, switches, routers, and modular hubs have come to play an important role. At present, a communication device may be a complex, dedicated multiprocessor that must be configured, optimized, and administered. To make changes in the principles of operation of communication equipment, it is necessary to study the many protocols used in both local and wide area networks.

The third layer, which forms the software platform of the network, is the operating system. The concept of managing local and distributed resources that underlies a network operating system determines the efficiency of the entire network. When designing a network, one should take into account how easily the system can interact with other operating systems in the network, how well it ensures the safety and security of data, and to what extent it allows the number of users to grow.

The fourth, topmost layer of networking tools comprises various network applications, such as network databases, mail systems, data archiving tools, and collaboration automation systems. It is important to know the range of capabilities that applications provide for different uses, as well as how compatible they are with other network applications and operating systems.

5.3. Types of local networks

To connect two PCs directly, they are joined with a special null-modem cable. The cable is attached while the PCs are turned off, and each connection method requires its own type of cable.

If a direct PC connection is used, then there are two types of their interaction:

1) direct access, in which only the transfer of information from one computer to another is possible;

2) remote control, in which it is possible to execute a program hosted on another computer.

With direct access, one of the computers is the master and the other is the slave. The user manages the operation of the interconnected computers from the master (host) PC. In this case, it is important to perform the following preparatory operations:

▪ installation of software components Client, Protocol, Services;

▪ installation of the Microsoft network file and printer sharing service. The corresponding check box must be set on the computer that provides the resources; files on this computer can then be shared;

▪ ensuring access at the resource level;

▪ designating as shared the resources of the server PC that participate in the exchange;

▪ connection from a client computer to shared information resources.

All actions of the Direct Connection command are performed by the Direct Connection Wizard through a sequence of Direct Connection dialog windows. These windows specify which computer is the slave and which is the master, the port used for communication, and the login password to use.

In the last Direct Connection window, if the parameters are set correctly, click the Receive Commands button on the slave computer and the Manage button on the master computer. After that, the master PC can use the shared resources of the slave, and of the entire local network if the slave PC is connected to one.

With remote control, the server is, as it were, an extension of the client. The basic synchronization scheme includes the following steps:

1) combination of stationary and portable computers. The desktop computer must be the host, and the folders containing the necessary files must be shared;

2) copying files from a stationary computer to a portable computer in the Portfolio folder;

3) disconnecting a portable computer from a stationary one and further editing files in the Portfolio folder;

4) reconnecting the portable computer to the stationary computer from which the source files were originally copied to the Portfolio folder. In this case, the portable computer must be the slave computer, and the folders with source files on the desktop computer must be shared;

5) opening the Portfolio folder and executing the Portfolio/Refresh command. If the original files have remained unchanged over the intervening period, all modified files in the Portfolio folder will be automatically copied in place of the originals. For files that were also modified on the desktop PC, a warning is issued, after which you must select one of the following actions:

▪ update on a laptop PC;

▪ update on a desktop PC;

▪ cancel any update.

The Portfolio/Refresh command need not synchronize all objects; it can also be applied to just a group of files marked in the folder.
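The refresh logic of steps 2-5 above can be sketched as a comparison of file modification times; conflict handling, which the real system resolves by asking the user, is omitted, and all paths below are temporary stand-ins:

```python
import os
import shutil
import tempfile
import time

# Minimal sketch of the Portfolio refresh step: for each paired file,
# the copy with the newer modification time replaces the older one.
def refresh(original_dir, portfolio_dir):
    for name in os.listdir(portfolio_dir):
        src = os.path.join(original_dir, name)
        dst = os.path.join(portfolio_dir, name)
        if not os.path.exists(src):
            continue                       # no paired original: skip
        if os.path.getmtime(dst) > os.path.getmtime(src):
            shutil.copy2(dst, src)         # portfolio copy newer: push back
        elif os.path.getmtime(src) > os.path.getmtime(dst):
            shutil.copy2(src, dst)         # original newer: pull update

desktop = tempfile.mkdtemp()               # stands in for the stationary PC
laptop = tempfile.mkdtemp()                # stands in for the Portfolio folder
path_d = os.path.join(desktop, "a.txt")
path_l = os.path.join(laptop, "a.txt")
with open(path_d, "w") as f:
    f.write("v1")
shutil.copy2(path_d, path_l)               # step 2: copy into the Portfolio
with open(path_l, "w") as f:
    f.write("v2")                          # step 3: edit while disconnected
os.utime(path_l, (time.time() + 10,) * 2)  # make the edit unambiguously newer
refresh(desktop, laptop)                   # step 5: Portfolio/Refresh
print(open(path_d).read())                 # -> v2
```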

5.4. Organization of the domain structure of the network

When computers are networked on the Windows NT platform, they are grouped into workgroups or domains.

A group of computers that forms an administrative unit and does not belong to a domain is called a workgroup. It is formed on the Windows NT Workstation platform. Each computer in a workgroup maintains its own information on user and group accounts and does not share it with the other computers in the workgroup. Workgroup members log on only to their own workstation and can browse the directories of other workgroup members over the network. Computers in a peer-to-peer network form workgroups, which should mirror the organizational structure of the enterprise: an accounting workgroup, a planning department workgroup, a personnel department workgroup, etc.

A workgroup can be created from computers with different operating systems. Members of the group can act both as users of resources and as their providers, i.e., they are equal. A computer that gives other PCs access to all or some of its local resources acts as a server.

When the network includes computers of different capacities, the most productive computer in the network configuration can be used as a non-dedicated file server. At the same time, it can store information that is constantly needed by all users. The rest of the computers operate in network client mode.

When you install Windows NT on a computer, you specify whether it is a member of a workgroup or a domain.

A logical grouping of one or more network servers and other computers that share a common security system and information in the form of a centrally managed database of user accounts is called a domain. Each domain has an individual name.

Computers belonging to the same domain can be located on a local network or in different countries and continents. They can be connected by various physical lines, such as telephone, fiber optic, satellite, etc.

Each computer in a domain has its own name, which is separated by a dot from the domain name. Together, the computer name and the domain name form the computer's fully qualified domain name.
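The naming rule above can be sketched in a few lines; the host and domain names are made-up examples:

```python
# Composing and splitting a fully qualified domain name (FQDN):
# the computer's own name, a dot, then the domain name.

def make_fqdn(computer, domain):
    return f"{computer}.{domain}"

def split_fqdn(fqdn):
    # everything before the first dot is the computer name
    computer, _, domain = fqdn.partition(".")
    return computer, domain

fqdn = make_fqdn("payroll1", "accounting.example.com")
print(fqdn)                    # -> payroll1.accounting.example.com
print(split_fqdn(fqdn)[0])     # -> payroll1
```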

Organizing the domain structure in the network, establishing certain rules within it, and managing the interaction between the user and the domain are the tasks of a domain controller.

A computer that runs Windows NT Server and uses a single shared directory to store user account and domain-wide security information is called a domain controller. Its task is to manage the interaction between the user and the domain.

All changes to domain account information are collected by the primary domain controller, stored in the directory database, and continuously replicated to the backup domain controllers. This ensures centralized management of the security system.

Several models for building a network with a domain architecture are used:

▪ single-domain model;

▪ model with a master domain;

▪ model with several master domains;

▪ model of completely trusting relationships.

5.5. Multilevel approach. Protocol. Interface. Protocol stack

Communication between devices on a network is a complex task. To solve it, a universal technique is used - decomposition, which consists in dividing one complex task into several simpler tasks-modules. Decomposition consists of a clear definition of the functions of each module that solves a particular problem, and the interfaces between them. As a result, a logical simplification of the task is achieved, in addition, it becomes possible to transform individual modules without changing the rest of the system.

When decomposing, a multilevel approach is sometimes used. In this case, all modules are divided into levels that form a hierarchy, i.e., there are higher and lower levels. The modules that make up each level are formed so that, to perform their tasks, they issue requests only to modules of the directly adjacent lower level. In turn, the results of the work of the modules belonging to a given level can be passed only to modules of the neighboring higher level. With such hierarchical decomposition of a problem, the function of each level and the interfaces between levels must be clearly defined. An interface establishes the set of functions that a lower level provides to a higher one. As a result of hierarchical decomposition, significant independence of the levels is achieved, i.e., they can easily be replaced.
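The hierarchy described above can be sketched as a stack of modules in which each level calls only the level directly beneath it; the layer names and duties below are purely illustrative:

```python
# Layered decomposition sketch: each layer exposes an interface to the
# layer above and calls only the layer directly below it, so any layer
# can be swapped out without touching the rest of the stack.

class PhysicalLayer:                      # lowest level
    def send_bits(self, bits):
        return bits                       # "transmit" and echo back

class TransportLayer:                     # uses only the layer below
    def __init__(self, lower):
        self.lower = lower
    def send_message(self, text):
        bits = text.encode()              # turn the message into bits
        return self.lower.send_bits(bits)

class FileService:                        # topmost: user-facing service
    def __init__(self, lower):
        self.lower = lower
    def fetch(self, name):
        return self.lower.send_message(f"GET {name}").decode()

stack = FileService(TransportLayer(PhysicalLayer()))
print(stack.fetch("readme.txt"))          # -> GET readme.txt
```

Replacing `PhysicalLayer` with a class that sends real signals would not require any change to the two layers above it, which is exactly the independence the decomposition aims for.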

Means of network interaction can also be presented in the form of a hierarchically organized set of modules. In this case, the lower level modules are able, in particular, to solve all issues related to the reliable transmission of electrical signals between two neighboring nodes. Higher-level modules will create message transport throughout the entire network using the lower-level tools for this. At the top level, there are modules that provide users with access to various services, including a file service, a print service, and so on. However, this is only one of many possible ways to divide the overall task of organizing networking into private, smaller subtasks.

The multilevel approach applied to describing and implementing system functions is used not only for network facilities. The same model appears, for example, in local file systems, where an incoming file-access request is processed in turn by several program levels: first by the top level, which parses the compound symbolic file name and determines the unique file identifier; the next level uses that unique identifier to find the remaining characteristics of the file (address, access attributes, etc.); then, at a lower level, access rights to the file are checked; and finally, after the coordinates of the file area containing the necessary data are calculated, a physical exchange with the external device is performed using the disk driver.

The multilevel representation of network interaction tools has its own specifics, related to the fact that two machines participate in the exchange of messages, so the coordinated work of two "hierarchies" must be organized. When transmitting messages, both participants in the network exchange must accept many agreements. For example, they need to agree on the levels and shapes of electrical signals, on how to determine the length of messages, on ways to check message validity, and so on. Thus, agreements must be adopted at all levels, from the lowest, the bit-transmission level, to the highest, which provides services to network users.

Modules that implement the protocols of neighboring layers and are located in the same node also interact with each other in accordance with well-defined rules and using standardized message formats. These rules are called an interface. An interface is a set of services that a given layer provides to its neighboring layer. In essence, protocol and interface express the same concept, but traditionally in networks they have been assigned different scopes: protocols define the rules of interaction between modules of the same level in different nodes, while interfaces define the rules of interaction between modules of neighboring levels in the same node.

The means of each level must implement, first, their own protocol and, second, interfaces with the neighboring levels.

A hierarchically organized set of protocols sufficient to organize the interaction of nodes in a network is called a communications protocol stack.
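The idea of a protocol stack can be illustrated by the following sketch, in which each layer adds its own header on sending and the receiving side strips the headers in reverse order; the layer names and the header format are invented for illustration.

```python
# Sketch of a protocol stack: on send, each layer wraps the message in its
# own header; on receive, the peer's layers strip the headers in reverse
# order. The layer names and "HEADER|payload" format are illustrative only.

STACK = ["APP", "TRANSPORT", "LINK"]   # top to bottom

def send(message: str) -> str:
    frame = message
    for layer in STACK:                # wrap, top layer first,
        frame = f"{layer}|{frame}"     # so the lowest layer's header
    return frame                       # ends up outermost on the wire

def receive(frame: str) -> str:
    for layer in reversed(STACK):      # strip outermost (lowest) first
        prefix = layer + "|"
        assert frame.startswith(prefix), f"bad {layer} header"
        frame = frame[len(prefix):]
    return frame
```

Both sides must agree on the same stack, which is the point made above: agreements are needed at every level for the two hierarchies to cooperate.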

Communication protocols can be implemented both in software and in hardware. Lower layer protocols are most often implemented by a combination of software and hardware, while upper layer protocols are usually implemented purely in software.

A software module that implements a protocol is often also referred to, for short, as a protocol. In this case, the relationship between a protocol (a formally defined procedure) and a protocol (a software module that performs this procedure) is similar to the relationship between an algorithm for solving a problem and a program that solves it.

The same algorithm can be programmed with varying degrees of efficiency. Similarly, a protocol may have several software implementations. Based on this, when comparing protocols, it is necessary to take into account not only the logic of their work, but also the quality of software solutions. In addition, the quality of the entire set of protocols that make up the stack affects the efficiency of interaction between devices in the network, in particular, how rationally the functions are distributed between the protocols of different levels and how well the interfaces between them are defined.

Protocols are implemented not only by computers, but also by other network devices such as hubs, bridges, switches, and routers. In general, computers on a network are connected not directly but through various communication devices. Depending on its type, a device requires certain built-in tools that implement one or another set of protocols.

5.6. Organization of accounts. User group management

All the information about a user needed to identify him and let him work on a Windows NT network is called an account. It is created for each user and contains a unique name, which the user types when logging on to the network, and a password for entering the network.

When creating an account, you must enter the following information:

1) the user group to which the user belongs;

2) the path to the user profile, which defines the user's environment and the programs available to him;

3) the time at which the user is allowed to enter the network;

4) the workstations from which the given user may log on to the network;

5) the validity period of the account and the type of account;

6) user rights to remote access and callback facilities.
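The information listed above can be pictured as a single account record; the field names in this Python sketch are illustrative, not actual Windows NT structures.

```python
# A sketch of the account data listed above gathered into one record.
# Field names and the logon check are illustrative, not NT internals.
from dataclasses import dataclass, field

@dataclass
class Account:
    username: str
    password: str
    groups: list = field(default_factory=list)        # 1) user groups
    profile_path: str = ""                            # 2) user profile
    logon_hours: tuple = (0, 24)                      # 3) allowed hours
    workstations: list = field(default_factory=list)  # 4) allowed stations
    expires: str = ""                                 # 5) validity period
    dial_in: bool = False                             # 6) remote access

acct = Account("jsmith", "secret", groups=["Users"],
               profile_path=r"\\server\profiles\jsmith")

def may_log_on(acct: Account, hour: int, station: str) -> bool:
    """Check restrictions 3) and 4): time window and workstation list
    (an empty workstation list means any station is allowed)."""
    start, end = acct.logon_hours
    ok_time = start <= hour < end
    ok_station = not acct.workstations or station in acct.workstations
    return ok_time and ok_station
```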

Account management is used to make changes to accounts. These changes may include changing the password, renaming the account, changing the user's group (removal from one and inclusion in another), blocking access, and deleting the account. Domain controller accounts can also be valid in other domains, provided a trust relationship has been established with them.

Windows NT 4 introduced the concept of managing user groups. The basis of this concept is the assignment of rights to a whole group of users at once and the exercise of access control by adding users to and removing them from different groups. With this approach, an account receives all the access rights of the group in which it is placed.

Global groups consist of user accounts that have access to servers and workstations both in their own domain and in other domains with which a trust relationship has been established. They are managed with User Manager for Domains.

Local groups consist of user accounts that have access only to resources on the local system within its own domain, and user accounts of global groups that have access to servers that are part of their domain.
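The group-based approach to rights can be sketched as follows; the group names and right names are illustrative only.

```python
# Sketch of group-based access control: rights are attached to groups,
# and a user's effective rights are the union over his group memberships.
# Group names and right names here are illustrative.

group_rights = {
    "Administrators":    {"configure_domain", "manage_accounts", "read", "write"},
    "Account Operators": {"manage_accounts", "read"},
    "Users":             {"read"},
}

memberships = {
    "alice": ["Administrators"],
    "bob":   ["Users", "Account Operators"],
}

def effective_rights(user: str) -> set:
    rights = set()
    for group in memberships.get(user, []):
        rights |= group_rights.get(group, set())
    return rights

# Moving a user between groups changes his access without editing any
# per-user permission entries, which is the point of the concept above.
```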

Administrators are the group responsible for the overall configuration of a domain and its servers. This group has the most rights. It includes the Domain Admins global group, which has the same rights as Administrators.

Account operators have the right to create new groups and user accounts, but their rights to administer accounts, servers, and domain groups are limited. The Users, Domain Users, Domain Guests, and Guests groups have significantly more limited rights. User-created groups can be copied, edited, and deleted. The group management wizard can add and create users; it works semi-automatically and provides step-by-step assistance with the following administrative tasks:

▪ creation of user accounts;

▪ group management;

▪ controlling access to files and folders;

▪ installation of printer drivers;

▪ installation and uninstallation of programs;

▪ licensing management;

▪ administration of network clients.

5.7. Security policy management

One of the most important administrative tasks is managing the security policy. It includes interactive user authentication, control of user access to network resources, and auditing.

Interactive user authentication is initiated by pressing the Ctrl + Alt + Del keys, which launches the WINLOGON utility and opens the Logon window.

When a user belongs to a workgroup, his account is created and stored in the SAM (Security Account Manager) database of his workstation, and the local authentication software consults the workstation's SAM database to validate the logon parameters entered. If a user logs on to a domain, the entered logon parameters are verified against the SAM database of the domain to which his machine belongs.
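The choice of database described above can be sketched like this; all user names and passwords are invented.

```python
# Sketch of the logon check described above: a workgroup logon consults
# the workstation's local account database, while a domain logon consults
# the domain's database. All names and passwords are invented.

local_sam  = {"alice": "local-pw"}      # accounts held on this workstation
domain_sam = {"alice": "domain-pw"}     # accounts held by the domain

def validate_logon(user: str, password: str, domain_member: bool) -> bool:
    # Which database is consulted depends on workgroup vs. domain membership.
    database = domain_sam if domain_member else local_sam
    return database.get(user) == password
```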

User access to network resources is controlled through the user's account settings, the rights of the user or user group, object access permissions, etc.

The user's account settings are configured by the administrator after the account is created. They include the permitted network time, the amount of main memory allotted to the user, and other user rights in the system.

The rules that set the actions available for use are called the rights of a user or user group. The granted rights and restrictions that are imposed on an individual user or group of users determine the user's ability to access network resources.

The user can have ordinary and extended rights. Typically, extended rights are granted only to programmers and occasionally to workstation administrators, but not to user groups.

The system policy editor is used to adjust and set new rights for a certain user by the administrator.

In Windows NT, administrative functions are most often performed using the User Manager, Server Manager, and others.

User rights are set by the administrator when the user account is created. System elements in Windows NT are objects, and each object is defined by a type, a set of services, and attributes.

Object types in Windows NT are directories, files, printers, processes, devices, windows, and so on; they affect the allowable sets of services and attributes.

The set of actions performed by or with an object is a set of services.

The object name, data, and access control list are part of the attributes. The access control list is a required property of an object. This list contains the following information: a list of the object's services, a list of users and groups that have permission to perform each action.
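An access control list of this kind can be sketched as follows; the object, user, and action names are invented.

```python
# Sketch of an access control list as described above: for each object,
# a list of (user-or-group, allowed actions). All names are illustrative.

acl = {
    "report.txt": [
        ("alice", {"read", "write", "delete"}),
        ("Users", {"read"}),
    ],
}

user_groups = {"bob": ["Users"], "alice": []}

def allowed(user: str, obj: str, action: str) -> bool:
    # A user acts either under his own name or through his groups.
    principals = [user] + user_groups.get(user, [])
    for who, actions in acl.get(obj, []):
        if who in principals and action in actions:
            return True
    return False
```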

If necessary, some user rights can be protected: access rights to objects are determined by the security descriptor.

NTFS file system permissions (write, read, execute, delete, change permissions) are included in local rights.

Control over remote rights is exercised through shared resources, which in turn are controlled by a network service that allows users of remote computers to access objects over the network.

Audit is used to record all events that occur in the local network; it informs the administrator about all prohibited user actions, provides an opportunity to obtain information about the frequency of access to certain resources, and establish the sequence of actions that users performed.

There are three levels of audit management:

1) enabling and disabling auditing;

2) auditing of any of the seven possible event types;

3) checking specific objects.
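These three levels of control can be sketched as successive filters; the event types and object names here are assumptions for illustration.

```python
# Sketch of the three audit-management levels above: a master on/off
# switch, per-event-type selection, and per-object selection.
# The event-type and object names are illustrative.

audit_enabled = True
audited_event_types = {"logon", "file_access"}   # level 2 selection
audited_objects = {"report.txt"}                 # level 3 selection

log = []

def audit(event_type: str, obj: str, user: str) -> None:
    if not audit_enabled:
        return                        # level 1: auditing switched off
    if event_type not in audited_event_types:
        return                        # level 2: event type not audited
    if obj and obj not in audited_objects:
        return                        # level 3: this object not audited
    log.append((event_type, obj, user))

audit("logon", "", "alice")                  # recorded
audit("file_access", "report.txt", "bob")    # recorded
audit("file_access", "notes.txt", "bob")     # filtered out at level 3
```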

5.8. Network resource management

Network resource management is multifaceted and includes the following tasks:

1) selective compression of NTFS volumes, folders and files, carried out to save disk space. Spreadsheets, text files, and some graphic files can shrink by several times;

2) archiving data and solving similar problems;

3) development of scripts defined by sets of commands. Among them: a script that automatically executes tasks when a user logs on to the system, a script for a particular user's home directory, and scripts that establish the appropriate network connections when different user names are used, etc.;

4) folder replication to other computers, which allows registration scripts to be replicated from one domain controller to another, and databases from one server to another, in order to maintain and organize trust relationships;

5) management of the launch and operation of services jointly with the service manager. These may include applications that run on the server in the background and provide support for other applications;

6) system performance monitoring, carried out using the System Monitor program;

7) disk management using the Disk Administrator program, including creating basic and extended partitions, formatting partitions, creating spanned volumes, etc.;

8) optimizing the operation of Windows NT 4 as a file server and as an application server (controlling the application server's processor and virtual memory, eliminating network problems), etc. In this case, the operation of hard drives is optimized, disk-access problems are eliminated at the program level, and network bandwidth problems are addressed;

9) print service management. Maintenance of printers is carried out through the use of a program that is accessed through the Printers folder from the control panel or Settings;

10) managing the entry of computers into the domain of your server, organizing domains, deleting computers, assigning a server as the main domain controller, replicating data to other servers, merging domains, managing trust relationships between domains, auditing network resources for each user, etc. All of the above actions are performed using Server Manager and User Manager for Domains;

11) management of shared resources. When a computer boots into Windows NT, default system shares are created for each of the system's disks to support networking and manage internal operations;

12) setting remote access control. The installation of the remote access client and server is carried out using the Network utility from the control panel. Modems, protocols and communication ports are installed using the same utility;

13) management of all connections in the network and access to information of the remote access server for which the Remote Access Management utility is used;

14) troubleshooting the network using Network Monitor, which can be used to view packets coming into and out of Windows NT.

5.9. Network Services

For the user, the network is not the computers, cables, and hubs, nor even the information flows; it is first of all the set of network services that allow him to view the list of computers available on the network or a remote file, print a document on a "foreign" printer, or send a mail message. It is this combination of capabilities (how wide their choice is, how convenient, reliable, and safe they are) that defines the look of each network for the user.

In addition to data exchange itself, network services solve other, more specific tasks, in particular those generated by distributed data processing. These include tasks aimed at ensuring the consistency of several copies of data hosted on different machines (the replication service), or at organizing the execution of one task simultaneously on several network machines (the remote procedure call service). Among network services, administrative services can be singled out: those aimed not at the ordinary user but at the administrator, and designed to organize the correct operation of the network as a whole. These include the user account administration service, which allows the administrator to maintain a common database of network users; the network monitoring service, whose functions include capturing and analyzing network traffic; and the security service, which, among other things, performs logon procedures with password verification.

The operation of network services is performed by software. The main services, the file service and the print service, are usually provided by the network OS, while secondary services, such as database, fax, or voice services, are performed by system network applications or utilities that work closely with the network OS. The distribution of services between the OS and utilities is quite arbitrary and varies across specific implementations.

When developing network services, it is necessary to solve problems inherent in any distributed applications, including the definition of an interaction protocol between the client and server parts, the distribution of functions between them, the choice of an application addressing scheme, etc.

One of the main indicators of the quality of a network service is its convenience. For the same resource, you can develop several services that solve the same task in different ways. The main problems lie in the performance or level of convenience of the services provided. For example, a file service might be based on a command to transfer a file from one computer to another by file name, and this requires the user to know the name of the desired file. The same file service can be organized so that the user mounts the remote file system to a local directory and then accesses the remote files as if they were his own, which is much more convenient. The quality of the network service is determined by the quality of the user interface - intuitiveness, clarity, rationality.

When determining the degree of convenience of a shared resource, the term "transparency" is often used. Transparent access is access in which the user does not notice where the resource he needs is located: on his own computer or on a remote one. After a remote file system is mounted into his directory tree, access to remote files becomes completely transparent to the user. The mount operation itself can also have varying degrees of transparency. In networks with less transparency, the user needs to know and specify in the command the name of the computer that stores the remote file system; in networks with a greater degree of transparency, the corresponding software component of the network searches for shared file volumes, regardless of where they are stored, and then shows them to the user in a convenient form, such as a list or a set of icons.

To achieve transparency, the way of addressing (naming) shared network resources is important. The names of such resources should not depend on their physical location on a particular computer. At best, the user should not change anything in their work if the network administrator has moved the volume or directory between computers. The administrator and the network OS have information about the location of file systems, but it is hidden from the user. This degree of transparency is still rare in networks. Most often, to gain access to the resources of a particular computer, you should establish a logical connection with it. This approach is used, in particular, in Windows NT networks.
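The role of such a location table can be sketched as follows; the volume and server names are invented, and the path format merely imitates UNC names.

```python
# Sketch of transparent resource naming: the user refers to a volume by a
# location-independent name, while a table kept by the administrator maps
# it to whichever computer currently stores it. All names are invented.

volume_location = {"projects": "server1", "archive": "server2"}

def open_remote(volume: str, path: str) -> str:
    host = volume_location[volume]        # hidden from the user
    return f"\\\\{host}\\{volume}\\{path}"

# Moving a volume changes only the table, not any user command:
volume_location["projects"] = "server3"
```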

5.10. Tools that provide interaction with other network operating systems

A network operating system is an OS that interacts with network equipment and provides inter-computer communications. Its user interface to the network allows files and peripherals to be shared. The Windows NT operating system is capable of interacting and exchanging data with many existing networks based on various network support systems. Circumstances that create this need include: the existence of networks already built on other operating systems whose resources Windows NT users require; and the creation of new networks combining Windows NT with other network operating systems to improve their efficiency.

Interoperability of networks built on Windows NT with other network operating systems is provided by the following facilities.

1. An open network structure with built-in mechanisms for dynamically loading and unloading various network components. These mechanisms can be used to load and unload third-party software, allowing Windows NT to support many different network protocols, network cards, and drivers.

2. Protocols compatible with the other networks that Windows NT supports and communicates with. The remote access service uses the following protocols to transfer data from one local network to another, remote, local network via the Internet: PPP, the Point-to-Point Protocol; SLIP, the Serial Line Internet Protocol; and PPTP, the Point-to-Point Tunneling Protocol, which includes an encryption mechanism for the Internet.

3. Network drivers and interfaces. They allow Windows NT to connect to different types of networks and interact with different types of computing systems.

4. A multi-user remote access service for Windows NT Server systems and single-user remote access for Windows NT Workstation systems. It provides remote WAN access to a Windows NT system. The remote access server can serve network connections based on different network operating systems. This is possible thanks to the ability to translate messages from one format to another, as well as the presence of a multi-network access router that establishes and terminates network connections, performs remote printing, and passes data over the network to the network component that processes resource requests.

5. The ability to run many applications written for different operating systems, thanks to the various APIs present in Windows NT. The Win32 I/O API is used, for example, when processing I/O requests for files located on a remote machine.

6. Built-in support for various file systems (NTFS, FAT, CD-ROM, VFAT, Macintosh), including the ability to convert FAT and HPFS partitions to NTFS partitions and support for Macintosh-format directories in NTFS partitions.

7. Support for the shared directory services of Windows NT and NetWare (NTDS and NDS): for example, a secure directory database, a distributed architecture, network single sign-on, and simple administration.

8. The ability to connect new users, such as users of other networks, to domains while maintaining the required level of system security by establishing trust relationships between domains. This includes built-in WAN facilities, which are used to connect LANs to one another over a WAN.

5.11. Organization of work in a hierarchical network

Hierarchical networks have one or more servers. They contain information that is used simultaneously by different users. There are file servers, database servers, print servers and mail servers.

The file server hosts shared files and shared programs. Workstations host only a small portion of these programs, which require negligible resources. Programs that allow this mode of operation are called network-installable programs.

On the database server there is a database, for example, "ConsultantPlus", "Garant", "Bank customer accounts", etc. The database on the server can be replenished from various workstations or information can be provided upon request from a workstation. In this case, there are three fundamentally different modes of processing requests from a workstation or editing records in the database:

1) database records are sequentially sent from the server to the workstation, where the records are filtered and the necessary ones are selected. In this case, the requirements for the server are reduced, but the load on the network channels and the requirements for the computing power of workstations increase;

2) the server selects the required records from the database and sends them to the workstation. This reduces the load on the network and reduces the level of requirements for workstations. In this case, the requirements for the computing power of the server increase dramatically. This method is the best and is implemented by special tools for working with modern network databases;

3) the "drain-spill" mode is used when the server, workstation, or network has low power. It is used to enter new records or edit them if a database record changes no more than once a day.
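The difference between the first two modes can be sketched by comparing how many records cross the network; the data here is invented.

```python
# Sketch contrasting the first two request-processing modes above:
# client-side filtering (mode 1) ships every record over the network,
# server-side filtering (mode 2) ships only the matching ones.
# The records and the predicate are invented.

records = [{"id": i, "balance": i * 100} for i in range(1, 6)]

def client_side(predicate):
    shipped = list(records)            # mode 1: all rows cross the network
    return shipped, [r for r in shipped if predicate(r)]

def server_side(predicate):
    matching = [r for r in records if predicate(r)]
    return matching, matching          # mode 2: only matches cross

def wanted(r):
    return r["balance"] >= 400

sent1, result1 = client_side(wanted)
sent2, result2 = server_side(wanted)
# The results are identical, but mode 1 loads the network and the
# workstation, while mode 2 loads the server, as described above.
```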

To create a print server, a fairly productive printer is connected to a low-power computer, which is used to print information simultaneously from several workstations.

The mail server is designed to store information sent and received both via the local network and from the outside via a modem. At the same time, the user can view the information received for him at any convenient time or send his own through the mail server.

For each user, three areas are allocated on the server hard disk:

1) personal, available only to its user with full rights, for example, to create folders and files in it, edit and use files, and delete them. Other users are not granted access to "other people's" personal areas and do not even see them in the file system, since personal areas are used to store the user's confidential information;

2) general, to which all network users have simultaneous access with the right to read and write. This area is used to exchange information between different network users or workstations. To do this, information from the user's personal area or from the workstation's local disk is written to the public area. From this area, another user overwrites it to his personal area or to the local disk of another PC;

3) a reading area in which the user can only read information.
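The three areas can be summarized as a simple rights function; the area names follow the list above, and everything else is illustrative.

```python
# Sketch of the three per-user areas described above: personal (owner
# only, full rights), general (everyone reads and writes), and a
# read-only area. User names and the rights sets are illustrative.

def rights(user: str, area: str, owner: str) -> set:
    if area == "personal":
        # full rights for the owner, no access for anyone else
        return {"read", "write", "delete"} if user == owner else set()
    if area == "general":
        return {"read", "write"}       # shared exchange area
    if area == "read":
        return {"read"}                # read-only area
    return set()
```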

In order to access the private area on the server, the user must complete the network logon or network registration procedures. The procedure for logging on to the network is carried out after turning on or restarting the computer.

5.12. Organization of peer-to-peer networks and technology of work in them

The user can install peer-to-peer network software himself. The software components for managing such a network also make it possible to organize a direct cable connection between two PCs using a null-modem cable. Peer-to-peer networks are networks of equal computers (workstations) in which there is no server part of the software. Each workstation installs client software, which consists of four components:

1) client - a program that implements the general functions of managing the interaction of a workstation with other computers on the network;

2) services - a program that sets the type of access to resources and ensures the transformation of a specific local resource into a network one and vice versa;

3) protocol - a program that controls the transfer of information in the network;

4) network card - a driver that controls the operation of the network adapter, however, when organizing a direct cable connection between a PC, this component may be absent.

Keep the following in mind when installing network software components.

1. To organize a peer-to-peer network (as a client), you must install the Client for Microsoft Networks program. Peer-to-peer networks allow shared information resources to be read and edited, as well as programs to be launched from a "foreign" computer. At the same time, each user can have his own desktop view, set of icons on it, personal settings for working on the Internet, etc.

2. As the service for Microsoft peer-to-peer networks or a direct cable connection, select File and Printer Sharing for Microsoft Networks.

3. The type of protocol is determined by the type of installed client and the type of network card. In this case, the protocol is often automatically installed during installation.

4. For PnP-class network cards, the Network card software component should be used. The card is installed automatically when the PC is restarted, provided drivers for the network card are included among the Windows drivers.

When organizing work in a peer-to-peer network, you should use the resources of different computers. A workstation resource in a peer-to-peer network is any of the following elements:

▪ long-term memory devices, including logical disks of HDDs, drives and other similar devices (information);

▪ folders, with or without lower-level subfolders (informational);

▪ devices connected to the computer, including printers, modems, etc. (technical).

A computer resource that is accessible from other computers on the network is called a shared, or network, resource. Shared resources are divided into shared information resources and shared technical devices. The concepts of local and shared resource are dynamic: any local resource can be converted into a network resource and back at any time by the "master" of the workstation.

Before using a network resource in peer-to-peer networks, the following organizational measures must be taken:

▪ clarify the composition of shared resources and select the computers on which they will be located;

▪ determine the circle of users who gain access to them;

▪ inform the future consumers of each resource of the names of the PCs on which it was created, the network names of the resources, and the rights and passwords for accessing them;

▪ create groups, if necessary, and include in them all PCs that are to be given access to the resource.
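These organizational steps can be pictured as data about each published resource; all names and the password here are invented.

```python
# Sketch of the organizational measures above as data: each shared
# resource records the hosting PC, its network name, the users allowed
# in, and an access password. All values are invented.

shares = {}

def publish(host: str, share_name: str, users: list, password: str) -> None:
    shares[share_name] = {"host": host, "users": set(users),
                          "password": password}

def connect(user: str, share_name: str, password: str) -> bool:
    share = shares.get(share_name)
    return (share is not None and user in share["users"]
            and password == share["password"])

publish("PC-7", "SCANS", users=["alice", "bob"], password="s3cret")
```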

5.13. Types of modem networks

A modem is a device that allows the exchange of information between computers using the telephone network. For the duration of the communication session, both computers must be connected to a telephone line using a modem.

Fax modems contain special circuitry that allows information to be exchanged not only between computers but also between computers and fax machines. Fax modems can operate in two modes, modem mode and fax-modem mode, in the latter exchanging fax messages. Although individual elements of the work are similar in a number of respects, the capabilities of each mode and the technology of working with them differ significantly.

The use of a modem makes it possible to employ the following network information technologies and information services.

1. Direct connection. This is the simplest way to connect two computers and organize the exchange of information between them without intermediaries and additional fees. If hourly payment for telephone calls is not applied, then modem operation within the local telephone network is free of charge. When a modem connection is established over a cellular or long-distance line, payment is made according to the time-based tariff established for that type of connection. Direct communication is provided by special communication programs.

Once a connection is established between the computers, communication programs allow files to be transferred between them immediately. With a direct connection, any type of file can be transferred, as well as text typed directly on the keyboard. The format of the document transmitted or received may be preserved or may change, depending on the transmission method used.

2. Communication with a bulletin board system (BBS). In this case, a connection is made to a computer or local network that holds a database and special software implementing a query language, which searches the database for the necessary information and copies it to the subscriber's computer. Within the local telephone network, the services of these information systems are provided to all users free of charge. To work with a BBS, you can use communication programs as well as special software downloaded from the BBS itself after the first call to it using a communication program. In addition to copying files, some BBSs offer additional features: addressed correspondence between subscribers, or the posting of messages addressed to a specific group of subscribers or to all BBS subscribers.

3. Remote access. This is one way to connect to a separate computer or office LAN. After this connection, the remote computer acquires the status of a full-fledged workstation of this network, and the modem simultaneously performs the functions of a network card.

4. Connection to global networks. A global network is a network of computers distributed around the world that provides information and other services on a commercial basis to everyone. Connection to a global network is made by modem through an intermediary, the provider. Powerful information nodes, i.e., the computers or local networks of providers connected by high-speed channels with the nodes of other providers around the world and together forming the global network, are called sites. The best-known global network is the Internet. The provider supplies services on a commercial basis, and to receive them a contract must be concluded in advance.

5.14. Installing and configuring the modem

Working with the modem includes a one-time installation stage and operations that are performed during each communication session. Modem installation is understood as its physical and software connection.

The physical connection method is determined by the type of modem. A modem can be internal or external. An internal modem is a board that plugs into an expansion slot on the motherboard. When installed, it creates an additional asynchronous (COM) port, and configuring this port may require a certain degree of user skill. Such a modem is not transportable. The advantages of an internal modem are its low cost, the fact that it does not require a separate connection to the mains, that it does not occupy an existing COM port, and that it is ready to work as soon as the computer is turned on.

External modems are stand-alone devices that are connected by special cables to a PC through asynchronous ports. This type of modem requires a connection to the mains, most often through the voltage converter supplied with it.

Both types of modem, when physically connected, can interface with a voice phone. There are the following connection methods:

▪ the modem is connected to the telephone socket, and the telephone is connected to the modem;

▪ Both the telephone and the modem are connected to the telephone socket through the connector on it.

With both connection methods, the connection to the subscriber can be made either by telephone or via the modem. Only the device (modem or telephone) from which the telephone number is dialed first is active (holds the line). With the first connection method, terminal programs allow you, after talking on the phone and without breaking the connection, to transfer control to the modem and then, after hanging up the handset, to carry out a modem communication session. This method is convenient when you need to call the subscriber in advance to announce the beginning of the session and agree on the communication parameters. With the second method, however, the presence of a parallel telephone or fax machine degrades the modem's performance.

In Windows, the modem is connected to the OS programmatically as a new device. The software connection is performed using the New Device Connection Wizard, which is called by the Control Panel / Hardware Installation / Modem command. The brand of the connected modem is either selected by the user from the list of modems recognized by the OS or determined automatically. When drivers are supplied by the modem manufacturer, the modem is installed in the usual way: by clicking the Install from Disk button or by running the installation program with the Start / Run command. After the modem has been connected programmatically, its parameters can be configured by performing the following sequence of actions:

1) activate the My Computer/Control Panel/Modems icon;

2) select a specific modem in the opened Modems window by clicking on the Properties button;

3) set the necessary values for the modem configuration parameters in the fields of the General and Establish connection tabs.

The port speed characterizes the speed of information exchange between the PC and the modem; it is set in the Maximum speed field of the General tab of the Modem Properties window. If the transmission speed on the line needs to be limited, reduce the port speed; the connection parameters in the Connection tab are left unchanged.

5.15. Establishing a connection with a remote personal computer

When using a modem, any communication session begins with establishing a connection with a remote computer. In Windows, this connection is provided by the Remote Network Access program, which is installed automatically together with Windows; at the time of installation the modem must be physically connected to the PC and turned on. In this program's window, a special Connection element is created for each telephone number, with the telephone number specified in its properties.

To create a Connection icon, follow the steps below; only the first step is mandatory.

1. Create a new icon. In the Remote Connection program window, click on the New Connection icon, and then in the subsequent windows of the Connection Creation Wizard, specify the name of the connection and the subscriber's telephone number. After this, an icon is created with the specified name, the recipient's phone number and some standard set of parameters that control the process of connecting with the subscriber. These parameters can be changed using the steps in the next paragraph.

2. Configure dialing parameters. The parameters in this group depend on the type of telephone line used; they control the connection establishment technology. To change parameters, double-click on the icon of the desired connection, and in the Connection Establishment window that opens, click on the Parameters button. You need to make all the necessary changes in the Dialing Options window. The meaning of most parameters is as follows:

▪ the dialing type determines the dialing system used, which can be pulse or tone. When a new connection is created, tone mode is set by default, so most often it needs to be changed to pulse. If this is not done on a line that supports only pulse dialing, the connection will not be established (this applies to all types of connections, including connections to the Internet);

▪ the Call location field allows you to keep several sets of dialing parameters for the same connection. This is convenient when you have to establish a connection from a laptop computer from different places that differ in the way the subscriber is called: for example, in one case directly and in another through a switchboard, or in one case from a line with tone dialing and in another with pulse dialing. In that case, click the Create button, enter in the Call Place field a name that identifies the corresponding set of parameters, set the necessary parameter values, and complete the setting by clicking the Apply button. The call location is then selected during the call establishment process.

3. Coordinating communication parameters with the subscriber's PC: setting the data transfer protocols and other characteristics needed to connect to the remote computer. The most important parameters are set in the Server type tab; these settings are especially important when establishing a connection to the Internet.

Connection with a specific subscriber is made using:

▪ double-click the Connection icon in the Remote Access program window. Frequently used connections can have their icons displayed on the desktop for ease of access;

▪ double-click on the connection icons that appear in the windows of the switching programs;

▪ specifying the name of the desired connection in special fields of Internet programs. This ensures that the required connection is established automatically.

5.16. Working with circuit programs

Switching, or terminal, programs allow you to use a modem to organize the exchange of information between two remote PCs, as well as to work with BBSs.

With direct switching, text information can be exchanged interactively: the text typed on the keyboard of one PC is immediately reproduced on the subscriber's monitor. Such switching can also be used to transfer files from one PC to another. To do this, both computers must be connected to a telephone line via modem, and HyperTerminal must be running on both. One of the computers then acts as the caller and the other as the answering (waiting) party; the distribution of these roles is agreed upon by the subscribers in advance. Establishing a connection between the computers involves the following steps:

1) on the waiting computer in the HyperTerminal window, double-click the Hypertrm icon, and then click the Cancel button. An empty New connection window will open, which is the working window of HyperTerminal, and in the menu of this window you need to execute the commands Communication / Wait for a call;

2) after performing the above actions on the waiting PC, on the calling PC double-click, in the HyperTerminal window, the icon of the receiving PC, or double-click the HyperTerminal icon to create a Connection icon. After that, the connection between the calling and waiting computers is established.

The connection to a BBS is also made using a terminal program. When connecting to the BBS for the first time, the control program will ask for a user login name and a password; both are chosen by the user. To receive mail addressed to the user on subsequent connections to the BBS, the correct name and password must be entered in the Connection window. After that, the control program, like the Wizards in modern operating systems, displays a sequence of menus on the monitor. For example, menu items may assign the following actions:

▪ return to the previous menu;

▪ calling the BBS system operator to exchange messages interactively;

▪ viewing the contents of text files or archives;

▪ selecting a file search topic from the list of topics provided;

▪ view a list of files in the selected area;

▪ specifying a list of files to copy to your computer;

▪ uploading files to BBS;

▪ viewing mail and sending it to specific recipients;

▪ logout and end of session, etc.

A modem can also be used for remote access to a single computer or a network. With its help, you can organize remote control of a slave computer by a master computer; the keyboard of the master computer then acts, in effect, as the keyboard of the slave. To do this, the Remote Access Server program must be installed on the slave computer. Its installation can either be requested during the installation of Windows or performed later using the Start / Settings / Control Panel / Add or Remove Programs command, where, in the Communication group, the Remote Access Server check box is marked. Once the program is installed, to allow this computer to be controlled from a remote one, launch the Remote Access program and execute the Connections / Remote Access Server menu command in its window. Then, in the windows that open, set the protocols and the password for access to the computer. Finally, create a Connection for accessing this computer, specifying in its properties all the values necessary for connection and access.

5.17. Working with a fax modem

Modern modems can exchange information not only with other computers but also with facsimile (fax) machines. Using such a modem, it is possible, for example, to send a message from a computer to a fax machine and vice versa. A modem operating in this mode is called a fax modem. This device is operated with the help of special terminal programs or universal organizer programs. The fax is set up after the modem is installed, when fax software is installed, or when the fax is accessed for the first time. A fax icon is placed in the Printers group, and the fax itself, like a printer, is attached to a special "logical" port. After installation, this port can be accessed from other applications as a printer: one way to fax a document created by an application is to print it with the Print command, specifying the installed fax as the printer. The fax operation parameters are changed and configured in the Properties window of the corresponding fax in the Printers group.

A fax message can be sent using:

1) the program in which the document was prepared. This method is easiest if the File menu of the program that prepared the document has Print or Send commands. An appropriate fax is set as a printer and a print command is issued;

2) organizer programs;

3) switching programs that have the ability to send fax messages.

When sending a message, a window appears in which it is necessary to fill in the message header containing the following fields:

▪ To - with one or more addresses of message recipients;

▪ Copy - with the addresses of recipients of copies, while in some systems the main recipients may or may not be notified of the presence of copies;

▪ Subject - brief information about the message.

To simplify the assignment of addresses, there are address books that include a list of frequently used addresses, as well as message forms that contain entire headers of various types.

Messages can contain text typed directly in a special window as well as attachments (text, graphic and other files, or a spreadsheet). A message may also consist of attachments only; this is the case when it is sent from an application program with the Print or Send command. Messages are protected from unauthorized access in various ways: by password, keys, electronic signature, etc.

When sending a message, you can specify:

▪ urgency of delivery - immediately, exactly on the specified date and time, within a certain time interval at a “cheap rate”;

▪ the presence and type of title page separating one message from another;

▪ print quality and paper size;

▪ the need to confirm receipt of the message and the method of protection;

▪ the number of repeated attempts to resend a message when this cannot be done immediately;

▪ the need to save the message.

Messages can be received automatically or manually. For automatic reception, the modem and computer must be turned on, and the communication program must be running when a message is sent (if a mail server is not involved in the exchange); the fax program must also be set to receive faxes automatically.

Topic 6. Internet networks

6.1. The emergence of the Internet

In 1962, J. C. R. Licklider, the first head of the computer research program at the US Department of Defense Advanced Research Projects Agency (DARPA), published a series of notes discussing the concept of a "Galactic Network". It was based on the assertion that in the near future a global network of interconnected computers would be developed, allowing each user quick access to data and programs located on any computer. This idea marked the beginning of the development of the Internet.

In 1966, at DARPA, L. Roberts began work on the concept of a computer network, and the ARPANET plan soon appeared. Later, the network's main data transfer protocols, TCP/IP, were created. Many public and private organizations wanted to use the ARPANET for daily data transmission, and in 1975 ARPANET went from experimental to operational status.

In 1983, the first standard for the TCP/IP protocols was developed and officially adopted, and was included in the Military Standards (MIL STD). To facilitate the transition to the new standards, DARPA proposed to the leaders of Berkeley Software Design that they implement the TCP/IP protocols in Berkeley (BSD) UNIX. After some time, TCP/IP was reworked into a common (public) standard, and the term "Internet" came into use. In parallel, MILNET was separated from ARPANET, after which MILNET became part of the Defense Data Network (DDN) of the US Department of Defense. The term "Internet" then came to refer to the single network: MILNET plus ARPANET.

In 1991, the ARPANET ceased to exist. The Internet, however, continues to exist and develop, and its dimensions are now far greater than the original ones.

The history of the development of the Internet can be divided into five stages:

1) 1945-1960 - the appearance of theoretical works on the interactive interaction of a person with a machine, as well as the first interactive devices and computers;

2) 1961-1970 - the beginning of the development of the technical principles of packet switching, the commissioning of ARPANET;

3) 1971-1980 - expansion of the number of ARPANET nodes up to several dozens, construction of special cable lines that connect some nodes, the beginning of the functioning of e-mail;

4) 1981-1990 - the adoption of the TCP/IP protocol, the division into ARPANET and MILNET, and the introduction of the "domain" name system - Domain Name System (DNS);

5) 1991-2007 - the latest stage in the development of the history of the global Internet.

6.2. Internet capabilities

The Internet is a global computer network that covers the whole world and contains a huge amount of information on any subject, available on a commercial basis to everyone. On the Internet, in addition to receiving information services, you can make purchases and commercial transactions, pay bills, order tickets for various types of transport, book hotel rooms, etc.

Each local network connected to the Internet is a node, or site. The legal entity that ensures the operation of a site is called a provider. A site includes several computers - servers - used to store information of a certain type and in a certain format. Each site, and each server on a site, is assigned a unique name that identifies it on the Internet.

To connect to the Internet, the user must conclude a service contract with any of the existing providers in his region. To start working on the network, you need to connect to the provider's website. Communication with the provider is carried out either via a dial-up telephone channel using a modem, or using a permanent dedicated channel. When connecting to a provider via a dial-up telephone channel, communication is carried out using a modem and remote access tools. If communication with the provider is made through a permanent dedicated channel, then a simple call to the appropriate program for working on the Internet is used. The opportunities that open up to the user are determined by the terms of the contract concluded with the provider.

Each information system on the Internet has its own means of searching for the necessary information across the network, for example by keywords. The network includes the following information systems:

1) World Wide Web (WWW). Information in this system is organized as pages (documents). With the help of the WWW you can watch films, listen to music, play computer games, and access a variety of information sources;

2) FTP system (File Transfer Protocol). It is used to transfer files, which become available for work only after being copied to the user's own computer;

3) e-mail (E-mail). Each subscriber has his own e-mail address with a "mailbox", an analogue of a postal address. Using e-mail, the user can send and receive text messages and arbitrary binary files;

4) news (the teleconferencing system Usenet newsgroups). This service consists of collections of documents grouped under specific topics;

5) IRC and ICQ. With the help of these systems, information is exchanged in real time. On Windows, these functions are performed by the MS NetMeeting application, which allows users on remote workstations to share images and exchange text.

Search, management and control tools on the Internet include:

▪ WWW search systems - used to search for information organized by one of the above methods (WWW, FTP);

▪ Telnet - a mode for remote control of any computer on the network, used to launch the necessary program on the server or any computer on the Internet;

▪ Ping utility - allows you to check the quality of communication with the server;

▪ Whois and Finger programs - used to find the coordinates of network users or determine the users currently working on a specific host.
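The connectivity check performed by a utility such as Ping can be sketched with an ordinary TCP connection attempt. This is a simplification and an assumption on our part: the real ping utility sends ICMP echo packets (which require raw-socket privileges), whereas the sketch below merely tests whether a specific TCP service on a host answers within a timeout; the host and port are placeholders.

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Try to open a TCP connection to host:port within `timeout` seconds.

    Approximates a reachability test; unlike the real `ping` utility,
    it checks one TCP service rather than ICMP echo.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # refused, timed out, or host unreachable
        return False
```

A terminal program could call such a check before starting a session, e.g. `is_reachable("example.org", 80)`.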

6.3. Internet software

Work on the Internet is supported by the following kinds of programs:

1) universal programs or software packages that provide access to any Internet service;

2) specialized programs that provide more opportunities when working with a specific Internet service.

Programs for working with the WWW are called browsers. They are usually supplied as part of a set of software tools that provide all networking capabilities.

The most widely used complexes are various versions of Netscape Communicator and Microsoft Internet Explorer (IE) versions 4.0 and 5.0. In Microsoft terminology these complexes are called browsers. One important advantage of IE is that, along with its browser functions, it can also be used as an explorer of the local computer's file system, and work with IE as an explorer is organized according to the same principles as work with it as a browser: in the same window, with the same menu, tool buttons and tools. Using IE thus eliminates the difference between working with the local file system and working with the WWW. IE is also closely integrated with the MS Office programs (Word, Excel, Access, PowerPoint, etc.), providing access to the Internet directly from those programs.

In addition to the browser for working with the WWW, the IE complex includes the Outlook Express (OE) program, which is used for e-mail and teleconferences. Because IE is an integrated complex, the browser and Outlook Express are delivered as a single installation package: they can be installed together, share common settings, call each other and exchange information.

MS Office contains MS Outlook organizer programs (which are not included in the IE complex), which provide, among many of their functions, the ability to work with e-mail and News. The MS Outlook organizer can completely replace Outlook Express. In cases where it is not rational to use MS Outlook as an organizer, but only as a means of working on the Internet, it is preferable to work with Outlook Express.

In addition to the listed programs included in the IE complex, there are many programs from various companies designed to work with e-mail and FTR servers. They can be purchased and installed separately from the IE complex. Thanks to these programs, the user can get additional convenience.

Internet access is made through the provider. To contact him, use one of the following methods:

▪ Internet access via dial-up lines or Dial-Up. In this mode, the main limitation is the quality of the telephone line and modem;

▪ permanent connection to the Internet via a dedicated line. This method of work is the most advanced, but the most expensive. It automatically provides access to all Internet resources.

When concluding a contract for dial-up access, the provider supplies the subscriber with a set of parameters that must later be specified in the various programs used to communicate with the provider and to work directly on the Internet.

6.4. Transfer of information on the Internet. Addressing system

On the Internet, as in local area networks, information is transmitted in the form of separate blocks called packets. A long message is divided into a certain number of such blocks. Each block contains the addresses of the sender and recipient, the data itself, and some service information. Each data packet is sent over the Internet independently of the others, and packets may travel by different routes. After the packets arrive at their destination, they are reassembled into the original message.
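The packet mechanism described above can be illustrated with a small sketch. This is a simplification: real IP packets carry binary headers and checksums, while here the addresses and sequence numbers are plain dictionary fields chosen for illustration.

```python
def split_into_packets(sender: str, recipient: str, message: str, size: int):
    """Split a message into numbered packets of at most `size` characters each."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [
        {"from": sender, "to": recipient, "seq": n, "total": len(chunks), "data": chunk}
        for n, chunk in enumerate(chunks)
    ]

def reassemble(packets):
    """Restore the original message; packets may arrive in any order."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return "".join(p["data"] for p in ordered)
```

Even if the packets are delivered in reverse order, sorting by sequence number restores the message, which is exactly why each packet must carry service information in addition to its data.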

There are three types of addresses used on the Internet:

1) IP address - the main network address assigned to each computer when it joins the network. An IP address is represented by four decimal numbers separated by dots, for example 122.8.45.7; each of the four values can range from 0 to 255. Any computer connected to the Internet has its own unique IP address. Such addresses are divided into classes according to the scale of the network to which the user is connected. Class A addresses are used in large public networks. Class B addresses are used in medium-sized networks (networks of large companies, research institutes, universities). Class C addresses are used in networks with a small number of computers (networks of small companies and firms). There are also class D addresses, intended for addressing groups of computers, and reserved class E addresses;

2) domain address - a symbolic address with a strict hierarchical structure, for example yandex.ru. In this form of address, the top-level domain is indicated on the right. It can consist of two, three or four letters, for example:

▪ com - commercial organization;

▪ edu - educational institution;

▪ net - network administration;

▪ firm - private company, etc.

On the left side of the domain address is the name of the server. Translation of a domain address into an IP address is done automatically by the Domain Name System (DNS), which assigns names by delegating responsibility for subsets of names to network groups;

3) URL (Uniform Resource Locator) - a universal address used to designate the name of any stored object on the Internet. This address has a specific structure: protocol://computer name/directory/subdirectory/…/file name. An example of such a name is http://rambler.ru/doc.html.
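The address classes listed above are determined by the value of the first decimal number (octet) of the IP address. The sketch below encodes the classic classful boundaries (A: 0-127, B: 128-191, C: 192-223, D: 224-239, E: 240-255):

```python
def ip_class(address: str) -> str:
    """Return the classful network class (A-E) of a dotted-decimal IP address."""
    octets = [int(part) for part in address.split(".")]
    if len(octets) != 4 or not all(0 <= o <= 255 for o in octets):
        raise ValueError(f"invalid IP address: {address}")
    first = octets[0]
    if first < 128:
        return "A"   # large public networks
    if first < 192:
        return "B"   # medium-sized networks
    if first < 224:
        return "C"   # small networks
    if first < 240:
        return "D"   # group (multicast) addressing
    return "E"       # reserved
```

For example, `ip_class("122.8.45.7")` classifies the sample address from the text as class A.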

6.5. Internet addressing and protocols

A host is a computer connected to the Internet. Each host on the network is identified by two address systems that always work together.

Like a telephone number, an IP address is assigned by the provider; it consists of four bytes separated by dots. Any computer on the Internet must have its own IP address.

In the domain name system, DNS names are assigned by the provider. A fully qualified domain name such as win.smtp.dol.ru consists of four simple domains separated by dots. The number of simple domains in a fully qualified domain name is arbitrary, and each simple domain describes some set of computers; the domains in the name are nested in one another. Formally, a fully qualified domain name ends with a dot.

Each of the domains has the following meaning:

▪ ru - country domain, denoting all hosts in Russia;

▪ dol - provider domain, denoting computers on the local network of the Russian company Demos;

▪ smtp - domain of the Demos server group, serving the email system;

▪ win - the name of one of the computers from the smtp group.

Of particular importance are the top-level domain names, located on the right side of the full name. They are fixed by the international organization InterNIC, and their construction is carried out on a regional or organizational basis.
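The nesting of simple domains described above can be made visible by reading the name from right to left: each additional label on the left names a smaller set of computers inside the broader domain. A minimal sketch (the trailing dot of a formally fully qualified name is stripped first):

```python
def domain_hierarchy(fqdn: str):
    """List the nested domains of a fully qualified name, broadest first."""
    labels = fqdn.rstrip(".").split(".")
    # e.g. for win.smtp.dol.ru: ru, dol.ru, smtp.dol.ru, win.smtp.dol.ru
    return [".".join(labels[-n:]) for n in range(1, len(labels) + 1)]
```

Applied to the example from the text, the function produces the chain ru → dol.ru → smtp.dol.ru → win.smtp.dol.ru, from the country domain down to a single computer.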

The URL addressing system is used to indicate how information is organized on a particular host and the information resource hosted on it. For example, the URL might be written as follows: http://home.microsoft.com/intl/ru/www_tour.html. The elements of this address entry denote:

▪ http:// - a prefix indicating the protocol type; here it shows that the address refers to a host that is a WWW server;

▪ home.microsoft.com - domain name of the host. The domain name may be followed by a colon and a number indicating the port through which the connection to the host will be made;

▪ /intl/ru/ - the path to the file: the ru subdirectory of the intl directory in the host's root;

▪ www_tour.html - file name (file extension can include any number of characters).
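The decomposition of a URL into protocol, host, optional port and path, described above, is exactly what Python's standard `urllib.parse` module performs. For illustration, a port number 8080 (not present in the original example address) is added to show where the colon-separated port appears:

```python
from urllib.parse import urlparse

url = "http://home.microsoft.com:8080/intl/ru/www_tour.html"
parts = urlparse(url)

print(parts.scheme)    # protocol type: http
print(parts.hostname)  # domain name of the host: home.microsoft.com
print(parts.port)      # port after the colon: 8080
print(parts.path)      # path to the file: /intl/ru/www_tour.html
```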

Remembering a long URL is difficult, which is why all Internet software has a Favorites tool. The existing networking tools provide convenient conditions for creating, storing and applying links. Among them are:

▪ the presence of a special Favorites folder. It exists in all WWW programs; you can create nested thematic folders in it. Examples of such folders may be, in particular, Banks, Socio-economic indicators, Analytical forecasts;

▪ introduction of tool buttons in the toolbars of Internet programs for using the most popular links;

▪ location of links or their shortcuts directly on the Desktop or in the taskbar;

▪ Automatic transfer of links from the Favorites folder to the Favorites menu item that appears when you click the Start button.

The e-mail address system is used to identify the addressee of electronic mail. Such an address must not contain spaces.

Addressing in the news system is similar to domain-name addressing. Each group of characters separated by dots forms a topic, and each topic in a conference name, like a DNS domain, denotes a collection of articles.

6.6. Problems of working on the Internet with Cyrillic texts

Different encoding systems have been used for Cyrillic texts in DOS and Windows. DOS used the encoding of code page 866, while Windows used the encoding of code page 1251. Texts prepared in a text editor under DOS therefore could not be read directly in Windows and required recoding, and texts prepared in Windows editors looked like gibberish when read in the DOS encoding. To eliminate this problem, transcoders were created, built into some text editors, that converted text from the DOS encoding to the Windows encoding and vice versa.

With the Internet, the problem became worse, because Cyrillic characters were encoded in yet a third way, using the KOI8 code table. KOI8 was traditionally used on computers running the UNIX operating system, and since Internet servers were initially built almost exclusively on UNIX, Russian-language texts on the Internet were encoded only in KOI8. This is why a Russian-language text on the Internet turned into gibberish when displayed in an encoding different from the one in which it was created. When working with the WWW, the problem can be solved using on-screen buttons that redisplay the document page in a different encoding.
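The incompatibility of the three Cyrillic encodings is easy to demonstrate: the same Russian word produces three different byte sequences under cp866 (DOS), cp1251 (Windows) and koi8-r (UNIX), and bytes written in one encoding turn into gibberish when decoded in another:

```python
word = "Привет"  # "Hello" in Russian

# the same word, three different byte sequences
for codec in ("cp866", "cp1251", "koi8_r"):
    print(codec, word.encode(codec).hex())

# decoding cp1251 bytes as koi8-r yields gibberish, not the original word
garbled = word.encode("cp1251").decode("koi8_r")
print(garbled)
assert garbled != word
```

This is precisely the transcoding that the DOS/Windows converters and the browser's encoding buttons perform: re-interpreting the same bytes under the correct code table.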

Difficulties with Cyrillic texts also arise when saving them. This can happen during further offline (outside the Internet) work with texts.

WWW pages can be saved in two ways:

1) saving in the same HTML format in which it was present on the Internet. In this case, such a file can be viewed and edited, firstly, with the same software that provided its viewing when working directly on the Internet, and secondly, with other specialized editors focused on working with the HTML format;

2) saving the document as a plain text file. In this case, textual information is saved without formatting elements. The document is stored in ASCII codes if it was created using code page 866 or 1251 (in DOS or Windows). Such a document can be read and edited both in DOS and in Windows, but when it is transcoded while being loaded into Word, "Text Only" must be specified as the transcoding method, not "DOS Text".

Protocols can be used for the following purposes:

1) implementation in the global network of the specified host addressing system;

2) organization of reliable information transfer;

3) conversion and presentation of information in accordance with the way it is organized.

The main protocol used when working on the Internet is TCP/IP, which combines the transmission control protocol (TCP) and the host-addressing internet protocol (IP). In practice, working on the Internet through a provider over a dial-up telephone line is performed using one of two modifications of the TCP/IP protocol: the SLIP protocol or the more modern PPP.

When a user needs only e-mail rather than the full range of Internet services, it is enough to work via the UUCP protocol. This is somewhat cheaper, but the available services are reduced.

For some information services, in addition to network-wide protocols, their own protocols are used.

6.7. Establishing a connection with the provider (Internet access)

When carrying out any kind of work in global networks, the initial step is connecting to the provider via modem. The connection method (dial-up or dedicated channel) determines the way of connecting to the provider and accessing the Internet. Let us analyze a Dial-Up connection using the TCP/IP protocol, assuming that the TCP/IP protocol is already installed in the Start/Settings/Control Panel/Network/Configuration window.

There are two ways to connect to the provider:

1) using the Remote Access tool, after which programs for working with the Internet are called;

2) through a special program for working with the Internet, such as Microsoft Internet Explorer. If there is no connection with the provider, the program itself establishes a connection with it.

In both cases, it is necessary to create a Connection, with the help of which communication with the provider is organized. In this case, the TCP/IP communication protocol must be configured in a special way. To create such a Connection, you can use the Internet Connection Wizard, whose shortcut is most often located on the Desktop. The Internet Connection Wizard can also be called directly from Internet Explorer (IE). In IE5, execute the menu commands Tools / Internet Options / Connection, click the Install button in the window that opens, and then follow the instructions of the Wizard. After these procedures, not only will the Connection be created, but the TCP/IP protocol will also be configured as needed. It is useful to be able to do this setup yourself by doing the following:

1) creating a regular Connection with the provider's phone number;

2) click on the created Connection with the right mouse button and select the Properties command from the context menu;

3) select the Server type tab in the opened window, and also:

▪ determine the type of remote access server (usually PPP);

▪ check the TCP/IP network protocol checkbox and uncheck all other flags in this window (if other flags need to be set, consult the provider's instructions);

▪ click on the TCP/IP Settings button;

4) set the selectors in the TCP/IP Settings window that opens. The IP address at the top of the window is assigned by the server, while the addresses in the center of the window must be entered manually; these are the provider's IP addresses. In the same window, the flags Use IP header compression and Use default gateway on remote network are most often checked; the correct state of these flags must be confirmed with the provider. For such a connection to work, the TCP/IP flag must be checked for the Remote Access Controller in the Bindings tab of its Properties window under Control Panel / Network / Configuration.

If the provider has several input phones, a separate connection is created for each of them. Any connection must be configured by the user in the specified way.

The password for connecting to the provider can be entered each time during the connection process or remembered and supplied automatically. When connecting to the provider, a message is displayed showing the achieved transfer rate; if this speed does not suit the user, the connection should be terminated and attempted again.

6.8. The World Wide Web (WWW)

The possibilities of the WWW provide access to almost all the resources of most major libraries in the world, museum collections, musical works, legislative and government regulations, reference books and operational collections on any topic, and analytical reviews. The WWW system has now become an intermediary that supports the conclusion of contracts, the purchase of goods and payment for them, the booking of transport tickets, the selection and ordering of excursion routes, etc. In addition, public opinion polls and ratings of politicians and businessmen are conducted through it. Usually, any reputable company has its own WWW page, and creating such a page is quite accessible to every Internet user.

WWW provides interaction between distributed networks, including networks of financial companies.

WWW features include:

▪ hypertext organization of information elements, which are WWW pages;

▪ the potential to include modern multimedia and other means of artistic design of pages into WWW pages, unlimited possibilities for placing information on the screen;

▪ the ability to post various information on the owner’s website;

▪ the existence of free, good and simple software that allows a non-professional user not only to view, but also to create WWW pages;

▪ the presence of good search engines among the software, allowing you to quickly find the necessary information. The existence of convenient means of remembering the addresses where the necessary information is located, as well as its subsequent instant reproduction if necessary;

▪ the ability to quickly move back and forth through pages already viewed;

▪ the existence of means to ensure the reliability and confidentiality of information exchange.

Efficient and easy work with the WWW is ensured by the availability of search systems for the required information. For any kind of resources on the Internet, there are search engines, and the very work of search engines on the WWW is based on searching by keywords. For this purpose, it is possible to specify various masks or patterns and logical search functions, for example:

▪ search for documents that contain any of the specified keywords or phrases;

▪ search for documents that include several keywords or phrases.
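The two logical modes above correspond to OR and AND keyword search, and can be sketched in a few lines of Python (the documents and keywords are invented for illustration):

```python
def search(documents, keywords, match_all=False):
    """Select documents containing the keywords.

    match_all=False: any keyword suffices (logical OR);
    match_all=True:  every keyword must occur (logical AND).
    """
    combine = all if match_all else any
    return [doc for doc in documents
            if combine(k.lower() in doc.lower() for k in keywords)]

docs = ["Culture and Art", "Business and Economics", "Art of Business"]

# OR search: every document mentions at least one keyword.
assert search(docs, ["art", "business"]) == docs

# AND search: only one document mentions both keywords.
assert search(docs, ["art", "business"], match_all=True) == ["Art of Business"]
```

Real search engines add ranking, word-form analysis, and index structures on top, but the core selection logic is this combination of per-keyword tests.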

All search tools can be divided into the following groups according to the method of organizing the search and the opportunities provided: catalogs and specialized databases, search and metasearch engines.

Catalogs on the WWW are similar in structure to organized library catalogs. The first page of the catalog contains links to major topics, such as Culture and Art, Medicine and Health, Society and Politics, Business and Economics, Entertainment, etc. If the desired link is activated, a page with links detailing the selected topic opens.

Search tools (search servers, search robots) enable the user, according to established rules, to formulate requirements for the information he needs. After that, the search engine automatically scans the documents on the sites it controls and selects those that meet the requirements put forward by the user. The search result may be the creation of one or more pages containing links to documents relevant to the query. If the search result led to the selection of a large number of documents, you can refine the query and repeat the search in accordance with it, but already among the selected pages.

6.9. Intranet

An Intranet is a local or geographically distributed private network of an organization that is characterized by built-in security mechanisms. This network is based on Internet technologies. The term "Intranet" appeared and became widely used in 1995. It means that the company uses Internet technologies within (intra-) its local network. The advantage of using an intranet is to enable all company employees to access any information necessary for work, regardless of the location of the employee's computer and the available software and hardware. The main reason for using the Intranet in commercial organizations is the need to speed up the processes of collecting, processing, managing and providing information.

Often, companies that do e-business on the Internet form a mixed network, in which a subset of the corporation's internal nodes forms an Intranet, while the external nodes connecting to the Internet form an Extranet.

The basis of applications on the Intranet is the use of Internet and, in particular, Web technologies:

1) hypertext in HTML format;

2) HTTP hypertext transfer protocol;

3) CGI server application interface.

In addition, the Intranet includes Web servers for static or dynamic publishing of information, and Web browsers for viewing and interpreting hypertext. The basis of all Intranet application solutions for interacting with the database is the client-server architecture.
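The three technologies listed (HTML hypertext, the HTTP protocol, and a server that publishes it) can be sketched together using Python's standard library; the page content and loopback address are illustrative:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

# A one-page "intranet" Web server: static hypertext over HTTP.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body><h1>Company intranet</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick a free port
port = server.server_address[1]
t = threading.Thread(target=server.handle_request)  # serve one request
t.start()

# The "browser" side: fetch and interpret the hypertext.
html = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
t.join()
server.server_close()
assert b"Company intranet" in html
```

In a real Intranet the same client-server pattern holds, with CGI or later server-side interfaces generating the HTML dynamically from corporate databases.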

For various organizations, the use of intranets has a number of important advantages:

1) On the intranet, each user on a configured workstation can access any of the most recent versions of documents as soon as they are placed on the Web server. In this case, the location of the user and the Web server does not matter. This approach in large organizations allows for very significant cost savings;

2) documents on the Intranet are able to update automatically (in real time). In addition, when publishing a document on a Web server, at any time it is possible to obtain information about which of the company's employees, when and how many times accessed the published documents;

3) many organizations use applications that allow access to company databases directly from a Web browser;

4) access to published information can be made via the Internet if there is a password for access to the company's internal databases. An external user who does not have a password will not be able to access the firm's internal confidential information.

6.10. Creating a Web Page Using Front Page

Creating Web pages most often and most effectively is done using the Microsoft FrontPage 2000 Web editor, which is ideal for learning HTML programming and the art of developing your own Web sites.

The FrontPage 2000 editor is part of the Microsoft Office 2000 suite and can also be purchased as a standalone program.

Key features of FrontPage 2000 include:

1) creating and saving Web pages on a computer hard drive and directly on the Internet;

2) downloading Web pages from the Internet and editing them;

3) viewing and administration of the Web page;

4) development of complex design;

5) the use of ready-made HTML tags;

6) use of ready-made drawings;

7) use of ActiveX controls and scripts in Web pages.

To develop a new Web page, execute the commands File/ New/ Page or press the key combination Ctrl+N. In this case, the New dialog box will appear on the screen, in which you should select the required page template or go to the Frames Pages tab (Frames). Also, the formation of a new page according to the Normal Page template can be done using the New button on the standard toolbar.

Saving Web pages is done using the Save command of the File menu or by pressing the key combination Ctrl + S. The name of the page is entered in the dialog box that appears, and its type is determined in the Save as type list. Saving a page on the Web or on a hard drive is done by specifying its location in the field at the top of this dialog box.

You can enter text into a new Web page using the keyboard, copy it from other documents, or use drag-and-drop. Entering text from the keyboard is done in the same way as in any text editor. To insert images into a Web page, select the Picture command from the Insert menu.

Any image on a Web page can be associated with a hyperlink. This is done by selecting the desired image and specifying the link address on the General tab of its properties dialog box.

In order to create a hypertext link, you need to select text or an image, select the Hyperlink command from the Insert menu or the context menu. In the URL field that appears in the window, enter the URL address.
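Behind the editor's Hyperlink dialog, a hyperlink is an ordinary HTML anchor tag whose href attribute holds the entered URL. A sketch of the markup such a command produces (the URL and link text are invented):

```python
def hyperlink(url, text):
    # An HTML anchor tag: href holds the URL entered in the
    # editor's Hyperlink dialog, the tag body is the visible text.
    return f'<a href="{url}">{text}</a>'

link = hyperlink("http://example.com/", "Example site")
assert link == '<a href="http://example.com/">Example site</a>'
```

FrontPage writes exactly this kind of tag into the page's HTML source, which can be inspected on the editor's HTML view tab.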

The properties of the created Web page are shown in the Page Properties dialog box, which is opened with the File/Properties command.

To publish Web pages, select the File/Publish Web command or press the button of the same name on the standard toolbar. In the resulting dialog box, you must specify the location of the Web page, options for publishing modified or all pages, and protection options. When you click the Publish button, the created Web pages will appear on the Internet.

6.11. FTP File Information Resources

The FTP system is a repository of various types of files (spreadsheets, programs, data, graphics, sound) that are stored on FTP servers. These servers are built by almost all major companies. The most common type of DNS name is ftp.<company name>.com.

By accessibility, information on FTP servers is divided into three categories:

1) freely distributed files (freeware), if their use is non-commercial;

2) protected information, access to which is provided to a special circle of registered users for an additional fee;

3) files with Shareware status. The user is able to try them out for free for a certain period of time. After this time, to continue the operation, you must register on the server and pay the cost of the file.

When you log in to the FTP server, you need to register with your ID and password. If there is no special registration system on the server, then it is recommended to give the word Anonymous as the identifier and your E-mail address as the password. When accessing files of the freeware or shareware category, this type of registration is used by the server developers to record and statistically analyze the circle of users.

Information on an FTP server is organized in the form of traditional directories, whose names are arbitrary. Files on FTP servers are divided into text files (in ASCII codes) and binary files (for example, documents prepared by Windows editors). These files are sent over the network in different ways. In a file copy program, you must specify the type of file to be transferred or set the Autodetect mode. In the latter mode, some programs consider only files with the TXT extension to be text files, while other programs allow you to specify a list of text extensions. Sending a binary file as a text file can lead to loss and distortion of information during transfer; if you do not know what kind of file it is, send it as binary, which in turn can increase transfer time. To make binary files transferable as text, they are converted into "pseudo-text" form; uuencode programs are used for this.
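The binary-to-"pseudo-text" conversion can be sketched with the uuencoding routines in Python's standard binascii module (the sample data is arbitrary):

```python
import binascii

# Arbitrary binary data (b2a_uu accepts up to 45 bytes per line).
data = bytes(range(32))

# Uuencode: every byte is mapped to a printable ASCII character,
# so the result survives transfer as a text file.
encoded = binascii.b2a_uu(data)
assert all(32 <= b <= 127 for b in encoded.rstrip(b"\n"))

# Decoding restores the original binary data exactly.
assert binascii.a2b_uu(encoded) == data
```

The cost of this safety is size: every 3 input bytes become 4 output characters, which is why sending data as pseudo-text takes longer than a raw binary transfer.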

It is possible to copy a file from an FTP server using a browser, but it is more convenient to do this using special programs (WS_FTP or CuteFTP). Both programs have two types of windows:

1) some analogue of the address book, in which the conditional meaningful names of FTP servers, their URLs, the identification name and login password, as well as other information common to the server are formed;

2) working window for direct work with the server.

When using these programs, the desired server is first selected from the address book. Then a connection is automatically established with it, after which a working window opens, which includes two panels. One of them corresponds to the user's computer, and the other to the server. Both panels contain a tree of directories with files. Navigating the tree and activating directories on both panels proceeds in the usual way. The selected files are marked and copied by command (clicking on the appropriate button) to the current directory of the local computer. When the connection is broken, these programs allow you to continue sending the file from the interrupted place.

In order to find a file by its name or name fragment, you need to use the Archie search engine, which is hosted on numerous servers. A constantly updated list of Archie servers is available on the Internet.

6.12. E-mail (E-mail)

E-mail allows you to quickly transfer messages and files to a specific recipient and provides access to any other Internet resources.

There are two groups of protocols by which e-mail works:

1) the SMTP and POP (or POP3) protocols. The SMTP (Simple Mail Transfer Protocol) protocol handles the transfer of messages between Internet nodes and allows messages to a single recipient address to be grouped, as well as E-mail messages to be copied for transmission to different addresses. The POP (Post Office Protocol) protocol gives the end user access to the electronic messages that have arrived for him. When a user requests to receive mail, POP clients ask for a password, which provides increased confidentiality of correspondence;

2) IMAP protocol. It allows the user to act on emails directly on the provider's server and, therefore, spend less time browsing the Internet.

Special mail programs are used to send and receive e-mail messages. These programs are used to:

▪ composing and transmitting messages both in the form of text messages and in HTML format, adding directly to the text of the message in the form of graphics, animation, sound;

▪ adding files of any kind to messages (creating attachments). Attachments are displayed as icons that are placed in special areas of the email. Icons include the name of the attached file and its size;

▪ decryption of a message received in various Cyrillic encodings;

▪ managing the priority of sending messages (urgent, regular);

▪ reducing communication time if you need to view received mail. In this case, at first only the headers (short content) of the message are issued and only specially requested messages are sent in full;

▪ automatic spelling and grammar checking of messages before sending;

▪ storing in the address book the necessary E-mail addresses of message authors for further use of these addresses when sending messages.

A message is prepared and sent by filling in the following fields on the mail program's screen:

1) To. This field contains the E-mail address of the main correspondent;

2) Copy. In this field, enter the addresses of correspondents who will receive a copy of the message;

3) Bcc. The purpose of the field is similar to the previous one, but even if there are addresses in it, the main correspondent is not aware of the presence of copies sent to these addresses;

4) Subject. This field contains a summary of the message. The text is given in the form of a message header when the addressee views incoming mail;

5) Messages. The text of the message is entered in this field. In mail programs, a text editor is used for this.
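These fields map directly onto standard e-mail headers, which can be sketched with Python's email library (all addresses and content are invented):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["To"] = "main@example.com"       # main correspondent
msg["Cc"] = "copy@example.com"       # receives a visible copy
msg["Bcc"] = "hidden@example.com"    # mail programs strip this header
                                     # before delivery, hiding the copy
msg["Subject"] = "Meeting agenda"    # shown as the message header
msg.set_content("The text of the message goes here.")

# An attachment travels inside the same message as a MIME part.
msg.add_attachment(b"\x00\x01\x02", maintype="application",
                   subtype="octet-stream", filename="data.bin")

assert msg["Subject"] == "Meeting agenda"
assert [a.get_filename() for a in msg.iter_attachments()] == ["data.bin"]
```

A mail program builds exactly such a message object from the form fields and then hands it to an SMTP server for delivery.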

Attaching a file is carried out by a menu command or a tool button; this opens the familiar Windows window with a directory tree for selecting the file to attach. The prepared message is sent by the Deliver Mail command; the message then goes to a special mail folder, Outbox. When the message is actually sent to the network is determined by the specified degree of urgency: an urgent message is sent immediately. In some programs, sent messages are placed in the Sent Items folder, where they can later be viewed or deleted. If delivery of the message turned out to be impossible for some reason (for example, an error in the address), the sender is automatically informed of this by a notification that arrives as an e-mail message in the incoming mail folder.

6.13. News, or conferences

A conference is a collection of text messages (articles) from its subscribers. Placing an article in a conference is called publication.

To work with news, either Outlook Express or MS Outlook is used. Programs for working with conferences provide:

▪ indication of the set of conferences in which the computer user plans to participate. This operation is called a subscription, and the set of conferences to which a subscription is made is called a subscription list. It is possible to make changes to any subscription list;

▪ viewing the names of authors and titles (topics) of articles in each specific conference from the subscription list;

▪ familiarization with the contents of articles and saving them in a file in some predetermined directory on the user’s computer;

▪ publication of your own article in a specific conference;

▪ individual response to the author of any article to his E-mail address;

▪ a collective response to the author of a specific article, appearing as a conference article.

The following settings apply to working with conferences:

1) the DNS name of the provider's server where the conference articles are stored. This server is called an NNTP server, and its name must be specified in the contract with the provider;

2) username to identify the author when viewing the titles of articles;

3) E-mail address of the user in order to provide the possibility of personal addressing the response to the article.

There are three types of windows for working with conferences in the software:

1) conference subscription window;

2) a viewing window in which headings and the content of articles of conferences are noted;

3) a window for creating articles. This window forms a public response to the article.

Each of the windows can be called by the corresponding menu command or by clicking on the tool button.

In the subscription window, you can display either a complete list of all conference groups supported by the NNTP server, or only a list of conferences that have been subscribed to. In each of the lists, you can display a subset of conferences that have a name containing a given combination of characters. To add a conference to the subscription list, double-click on the conference name; to exclude a conference from the list, you must also double-click on its name in the subscription list.

The viewing window appears when you start Outlook Express, and the other windows are called from it. This window contains:

▪ a drop-down list listing the names of conferences from the subscription list, as well as the Outbox, Inbox, Sent, Deleted folders;

▪ headings field, which indicates the list of articles contained in the conference or folder selected in the previous paragraph. Only original articles may be included in the list. It is possible to exclude articles from the list that have already been read;

▪ content field, in which the text of the article selected in the headings field is displayed. An article often includes attached files.

The article can be sent to the conference, and a copy - by e-mail to any addressee.

The article creation window must be opened when creating a new article, public or private response to the author. Working with this window is similar to creating and sending an e-mail. An article can be created in any of the following formats: HTML, Uuencode, MIME. If the message is sent in HTML format, it will be output when read in the same format, otherwise the message will be output as plain text with an HTML file attachment. The recipient will be able to view the attached file with full formatting in any WWW page viewer.

6.14. Electronic commerce. Online store. Internet payment systems

E-commerce is the acceleration of most business processes by conducting them electronically. In the mid-1990s, e-commerce began to grow rapidly all over the world, and numerous sellers of traditional goods appeared.

E-commerce uses many different technologies: EDI, email, Internet, Intranet, Extranet.

The most advanced information technology used by e-commerce is the Electronic Data Interchange (EDI) protocol, which eliminates the need for processing, mailing and additional input into computers of paper documents.

Electronic commerce on the Internet can be divided into two categories: B2C - "company-consumer" and B2B - "company-company".

The main model of B2C (business-to-consumer) trade is the online retail store, a developed structure for meeting consumer demand.

B2B e-commerce within the Internet has taken on a new meaning. The B2B marketplace was created for organizations to support interaction between companies and their suppliers, manufacturers and distributors. The B2B market opens up far greater opportunities than the B2C trading sector.

Technically, an online retail store is a combination of an electronic storefront and a trading system.

To purchase any product in the online store, the buyer must go to the Web site of the online store. This Web site is an electronic storefront that contains a catalog of goods, the required interface elements for entering registration information, placing an order, making payments via the Internet, etc. In online stores, customers register when placing an order or entering a store.

The Internet server hosts an e-commerce storefront, which is a Web site with active content. Its basis is a catalog of goods with prices, containing complete information about each product.

Electronic storefronts perform the following functions:

▪ providing an interface to the database of offered goods;

▪ work with the buyer’s electronic “basket”;

▪ placing orders and choosing a method of payment and delivery;

▪ registration of buyers;

▪ online assistance to the buyer;

▪ collection of marketing information;

▪ ensuring the security of buyers’ personal information;

▪ automatic transmission of information to the trading system.
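The storefront's catalog and the buyer's electronic "basket" reduce to a small data structure; a sketch with invented product names and prices:

```python
class Cart:
    """Minimal electronic shopping basket over a price catalog."""

    def __init__(self, catalog):
        self.catalog = catalog          # product -> unit price
        self.items = {}                 # product -> quantity ordered

    def add(self, product, quantity=1):
        # Only products listed in the storefront catalog can be added.
        if product not in self.catalog:
            raise KeyError(f"not in catalog: {product}")
        self.items[product] = self.items.get(product, 0) + quantity

    def total(self):
        # Order total handed to the trading system at checkout.
        return sum(self.catalog[p] * q for p, q in self.items.items())

catalog = {"book": 12.50, "disc": 8.00}
cart = Cart(catalog)
cart.add("book")
cart.add("disc", 2)
assert cart.total() == 28.50
```

A real storefront wraps the same structure in a database-backed Web interface and forwards the completed order, with the buyer's registration data, to the trading system.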

The buyer who has chosen the goods must fill out a special form, which includes the method of payment and delivery of the goods. After placing an order, all collected information about the buyer is transferred from the electronic storefront to the trading system of the online store. The availability of the required product is checked in the trading system. If the product is not available at the moment, the store sends a request to the supplier, and the buyer is informed of the delay time.

After payment for the goods, when it is transferred to the buyer, confirmation of the fact of the order is necessary, most often by e-mail. If the buyer can pay for the goods via the Internet, a payment system is used.

The most popular purchases in online stores include: software; computers and accessories; travel services; financial services; books, video cassettes, discs, etc.

6.15. Internet auctions. Internet banking

An online auction is an electronic trading showcase through which the user can sell any product. The owner of an online auction receives a commission from every transaction, while the turnover of online auctions is much larger than the turnover of the rest of online retail trade.

The world's largest auction firms are also moving online. Any goods can be offered at online auctions. However, there are certain groups of goods that are most suitable for auction trading:

1) computers and components, high-tech goods;

2) discounted goods;

3) slow-moving goods;

4) recent sales leaders;

5) collectibles.

Auctions can be classified by the direction in which bids move: they may rise from a minimum to a maximum or, conversely, fall from a maximum to a minimum.

A regular auction does not have a reserved or floor price; the goods are given to the buyer in exchange for paying the maximum price.

In a public auction, the current maximum bid and bid history are available to each participant and visitor. There are no restrictions for participants, except for the guarantee.

In a private auction, bids are accepted for a strictly limited time. A participant can make only one bid and has no opportunity to find out the size or number of other participants' bids. After the agreed period ends, the winner is determined.

A silent auction is a variation of a private auction where the bidder does not know who has bid but can find out the current maximum bid.

In a floor auction, the seller offers the item and determines the minimum starting selling price. When bidding, buyers know only the size of the minimum price.

A reserved-price auction differs from a floor auction in that bidders know a minimum price has been set but do not know its value. If the minimum price is not reached during bidding, the item remains unsold.

A Dutch auction is an auction in which the starting price is set deliberately high and is automatically reduced during bidding; the price reduction stops when a bidder agrees to the current price and stops the auction.
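Two of these bidding schemes can be sketched in a few lines: a sealed-bid (private-style) auction where the single highest bid wins, and a descending-price auction where a falling price stops at the first taker. Participant names, prices, and steps are invented:

```python
def sealed_bid_winner(bids):
    # Private auction: one sealed bid per participant, highest wins.
    return max(bids, key=bids.get)

def descending_auction(start_price, step, accept):
    # Descending-price auction: the price is reduced automatically
    # until some bidder stops the auction by accepting.
    price = start_price
    while not accept(price):
        price -= step
    return price

assert sealed_bid_winner({"anna": 120, "boris": 150, "clara": 90}) == "boris"

# A bidder willing to pay at most 70 stops the falling price at 70.
assert descending_auction(100, 10, lambda price: price <= 70) == 70
```

The other variants (public, silent, floor, reserved-price) differ only in what information is revealed to participants and whether a minimum price constrains the outcome.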

The basis for the emergence and development of Internet banking are the types of remote banking used in the earlier stages of the existence of banking. Through the Internet banking system, a bank client can carry out the following operations:

1) transfer of funds from one of your accounts to another;

2) implementation of non-cash payments;

3) purchase and sale of non-cash currency;

4) opening and closing deposit accounts;

5) determination of the settlement schedule;

6) payment for various goods and services;

7) control over all banking transactions on your accounts for any period of time.
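The first of these operations, a transfer between a client's own accounts, reduces to simple balance arithmetic with an overdraft check; a toy sketch with invented balances:

```python
class Account:
    """A toy bank account; balances are invented for illustration."""

    def __init__(self, balance=0.0):
        self.balance = balance

def transfer(source, destination, amount):
    # Move funds between two of the client's accounts,
    # refusing overdrafts.
    if amount > source.balance:
        raise ValueError("insufficient funds")
    source.balance -= amount
    destination.balance += amount

current = Account(1000.0)
deposit = Account(0.0)
transfer(current, deposit, 250.0)
assert current.balance == 750.0 and deposit.balance == 250.0
```

A real Internet banking system wraps this logic in authentication, transaction logging, and atomicity guarantees, which is exactly where the security concerns mentioned below arise.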

When using Internet banking systems, a bank client gains some advantages:

1) significant time savings;

2) the ability to monitor your financial resources 24 hours a day and better control them, quickly respond to any changes in the situation in the financial markets;

3) tracking operations with plastic cards to increase the client's control over their operations.

The disadvantages of Internet banking systems include the problems of ensuring the security of settlements and the safety of funds in customer accounts.

6.16. Internet insurance. Internet exchange

Internet insurance is currently a frequently used financial service provided via the Internet.

Insurance is the process of establishing and maintaining relations between the insured and the insurer, which are fixed by the contract. The insurer determines the various options for insurance programs offered to the insured. If the client chooses any insurance option, then both parties conclude an insurance contract. From the commencement of the insurance contract, the policyholder undertakes to pay lump-sum or regular sums of money specified in the concluded contract. In the event of an insured event, the insurer must pay the insured a monetary compensation, the amount of which was established by the terms of the insurance contract. An insurance policy is a document that certifies the conclusion of an insurance contract and contains the obligations of the insurer.

Internet insurance is a complex of all the elements of the relationship between the insurance company and its client listed above, arising in the process of selling an insurance product, servicing it and paying insurance compensation (using Internet technologies).

Online insurance services include:

1) filling out the application form, taking into account the selected program of insurance services;

2) ordering and direct payment for an insurance policy;

3) calculation of the amount of the insurance premium and determination of the conditions for its payment;

4) making periodic insurance payments;

5) maintenance of the insurance contract during its validity period.
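The premium calculation in step 3 is, in the simplest case, a tariff rate applied to the insured sum, optionally split into periodic payments. A sketch with an invented sum and tariff:

```python
def premium(insured_sum, rate_percent, installments=1):
    """Total premium as a tariff percentage of the insured sum,
    split into equal periodic payments."""
    total = insured_sum * rate_percent / 100
    return total, total / installments

# A policy of 50,000 at a 2% tariff, paid in 4 installments.
total, payment = premium(50_000, 2, installments=4)
assert total == 1000.0
assert payment == 250.0
```

Real insurers replace the flat rate with actuarial tables depending on the risk, the insured's profile, and the chosen program, but the structure of the calculation is the same.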

When using Internet technologies for insurance companies, the client receives the following benefits:

1) reduction of capital costs in the creation of a global service distribution network;

2) a significant reduction in the cost of providing services;

3) creation of a permanent client base of the most active consumers.

An Internet exchange is a platform through which the state, legal entities or individuals trade in goods, services, shares and currencies. The electronic trading system is a central server and local servers connected to it. Through them, access to trading platforms is provided to trade participants. The advantages of the Internet exchange include the external simplicity of concluding transactions and reduced tariffs for the services of on-line brokers. The investor can use the advice of a broker or do without them.

Internet exchanges perform the following functions:

1) timely provision of necessary information to bidders;

2) organization of trade in goods between enterprises;

3) automated process of payment and delivery of goods;

4) cost reduction.

Among the well-known Internet exchanges, the following can be distinguished: oil exchanges, agricultural product markets, precious metals market, stock markets, foreign exchange markets.

The main segments of the global financial market include the precious metals market, stock and currency markets.

Commodities on stock markets are shares of various companies. Commodities on the foreign exchange market are the currencies of various countries. The foreign exchange market has a number of significant advantages compared to the securities market:

1) trading on the foreign exchange market can be started with a small initial capital;

2) in the foreign exchange market, transactions are carried out according to the principle of margin trading;

3) the functioning of currency exchanges occurs around the clock.

A trader is a natural or legal person who carries out transactions on his own behalf and at his own expense, whose profit is the difference between the purchase and sale prices of a commodity, share or currency.
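The trader's profit described above, combined with the margin-trading principle from the previous list, can be sketched as follows. All figures (prices, deposit, leverage ratio) are invented for illustration:

```python
# Sketch of a margin trade on a currency exchange.
# With leverage, the trader controls a position larger than the deposit;
# profit is the difference between sale and purchase prices times volume.

def trade_profit(buy: float, sell: float, deposit: float, leverage: int) -> float:
    """Profit (or loss) on a position opened with margin leverage."""
    position = deposit * leverage   # notional position size
    units = position / buy          # units of currency bought
    return round(units * (sell - buy), 2)

# Buy EUR/USD at 1.1000, sell at 1.1050, with a 1,000 deposit and 1:100 leverage:
print(trade_profit(1.1000, 1.1050, 1_000, 100))  # about 454.55
```

Note that leverage magnifies losses in the same proportion as gains: the same function returns a negative value when the sale price is below the purchase price.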

6.17. Internet Marketing. Internet advertising

Marketing is a system for managing the production and marketing activities of an organization. Its goal is to obtain an acceptable amount of profit through accounting and active influence on market conditions. When creating a marketing concept for a company, the fundamental differences between the Internet and traditional media should be taken into account:

▪ The Internet consumer is an active component of the communication system. The use of the Internet allows for interaction between suppliers and consumers. In this case, consumers themselves become suppliers, in particular providers of information about their needs;

▪ the consumer’s level of awareness about the subject on which he is trying to find information is much higher than that of a person who watches an advertisement for the same product on TV;

▪ it is possible to exchange information directly with each consumer;

▪ the conclusion of a transaction is achieved through the interactivity of the Internet environment itself.

Any marketing campaign on the Internet is based on a corporate Web site around which the entire marketing system is built. In order to attract visitors to a particular Web server, a company must advertise it through registration in search engines, Web directories, links to other Web sites, etc. Marketing activities on the Internet are carried out due to the following advantages of e-mail marketing:

▪ almost every Internet user has an email address;

▪ there is the possibility of influencing a specific audience;

▪ modern email clients support the html format of letters.
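The last point, HTML-format letters, can be illustrated with the Python standard library. The addresses and content below are made up; actually sending the message would require an SMTP server (via smtplib), which is omitted here:

```python
# Sketch: building an HTML marketing letter with a plain-text fallback.
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Spring catalogue"
msg["From"] = "news@example.com"
msg["To"] = "client@example.com"
msg.set_content("Plain-text fallback for older mail clients.")
msg.add_alternative(
    "<html><body><h1>Spring catalogue</h1>"
    "<p>New products are <a href='https://example.com'>here</a>.</p>"
    "</body></html>",
    subtype="html",
)

# The message now carries both representations:
print(msg.get_content_type())  # multipart/alternative
```

Mail clients that support HTML render the formatted part; others fall back to the plain text.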

The advantage of Internet marketing over other, more traditional forms of marketing is the lower cost of an advertising campaign, since the Internet reaches a very large audience at low incremental cost. Internet marketing also makes it possible to direct the flow of advertising only to the target audience, to evaluate its effectiveness, and to quickly change the main focus of an advertising campaign.

The disadvantages of Internet marketing include the unknown size of the market, the passivity of consumers, and the lack of knowledge about consumers.

Internet advertising is used to inform users about the Web site of a company. It can exist in the form of several main carriers.

A banner is a rectangular graphic image in GIF or JPEG format and is the most common advertising medium. When designing banners, Web designers take two observations into account:

1) the larger the size of the banner, the more effective it is;

2) animated banners can be more effective than static ones.

A small Web page that is hosted on a Web publisher page is called a mini-site. Mini-sites are usually dedicated to a specific marketing campaign, product or service.

Advertiser information is a snippet of one or more Web publisher pages.

Placing a company's advertising on the Internet helps achieve the following goals:

1) creating a favorable image of your company;

2) widespread access to information about your company to many millions of Internet users;

3) reduction of advertising costs;

4) providing support to its advertising agents;

5) implementation of opportunities to present information about the product;

6) prompt changes to the price list, information about the company or products, prompt response to the market situation;

7) selling your products via the Internet without opening new retail outlets.

There are two methods for determining the effectiveness of online advertising:

1) study of server statistics and the number of hits to advertising pages;

2) a survey of the potential audience to determine the degree of familiarity with the advertised company.

These methods can be used alone or combined to improve the objectivity of the assessment.
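The first method, studying server statistics, commonly reduces to computing the click-through rate (CTR): the share of banner impressions that led to a visit. The figures below are invented for illustration:

```python
# Sketch of judging banner effectiveness from server statistics.
# CTR (click-through rate) = clicks / impressions, as a percentage.

def ctr(impressions: int, clicks: int) -> float:
    """Click-through rate as a percentage."""
    return round(100 * clicks / impressions, 2)

# 50,000 banner impressions produced 600 visits to the advertised page:
print(ctr(50_000, 600))  # 1.2
```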

Topic 7. Basics of working with general-purpose application programs

7.1. Definition of application programs

An application program is any specific program that contributes to solving a problem within a given problem area. For example, if a computer is assigned the task of controlling a firm's financial activities, an application in this case would be a payroll-preparation program. Some application programs are general in nature, i.e. they provide the compilation and printing of documents, etc.

Unlike application programs, OS or tool software does not directly contribute to the satisfaction of the end user's needs.

Application programs can be used either autonomously, i.e., solving a given task without the help of other programs, or as components of software systems or packages.

7.2. Text editors

A text editor is a software tool used to prepare text documents.

When executing various business documents on a computer, it is necessary to use text editors that occupy an intermediate position between the simplest editors and publishing systems.

Typing in a text editor should consider the following:

1) the mouse pointer and the text cursor are not the same. The mouse pointer usually resembles an arrow; as it moves over the text-filled portion of the screen, its appearance changes;

2) the cursor pointer is always located in the text field of the document, it is a blinking vertical line;

3) the marker of the end of the text is a thick horizontal line at the end of the typed text.

When preparing text in a text editor, the typed text should then be edited. Editing here means setting the sheet size, highlighting headings, setting the red line (first-line indent) of paragraphs, inserting figures and objects, and so on. If the text is being prepared for presentation in hypertext form, editing should also include adding the appropriate HTML markup to the text; MS Office 97 provides such capabilities.

You can call various editor functions using the mouse or special key combinations. Working with the mouse is considered the most natural, but the use of some combinations of "hot keys" significantly speeds up the work.

The main menu is used to control the editor. Panels serve as an additional tool for managing a text editor: a standard toolbar, editing and formatting toolbars, etc.

In order to speed up the work, these panels are provided with buttons that duplicate the various actions performed in the text editor using the main menu options. When calling each menu item, a submenu appears on the display screen, which specifies the actions of the editor. These actions can be performed by selecting this menu item.

To install the required font, perform the Format / Font sequence, leading to the appearance of a window in which you should select the font type and letter size. The correct choice of font type and size is reflected in the nature of the text and depends on the experience of working with the editor.

A font is a set of letters, digits and special characters designed according to uniform requirements. The design (drawing) of a font is called its typeface; fonts differ in style, and the size of a font is called its point size.

In order to perform any operations in a certain fragment of text, you must first mark or select this fragment. After that, the necessary parameters are changed.

The basis of text editing is editing headings and paragraphs. To do this, select the Format / Paragraph options, and after the window appears on the screen, the necessary action.

When setting the distance between lines in a paragraph, you must use the Line spacing window, where single, one and a half, double or other spacing is set.

A red line (first-line indent) is used to mark the beginning of a paragraph. The size of the cursor movement during tabulation can be set using the ruler located under the toolbars. To make the ruler appear on the screen, activate it in the View menu. With the ruler active, place the cursor in the appropriate place and press the left mouse button; a special character then appears that determines where the cursor jumps when the Tab key is pressed.

7.3. Table processors

A spreadsheet processor is a set of interrelated programs designed to process spreadsheets.

A spreadsheet is the computer equivalent of a regular table, consisting of rows and columns, at the intersection of which there are cells containing numerical information, formulas or text. The value in the numeric cell of the table is either written down or calculated using the appropriate formula. Formulas may contain references to other cells.

Whenever the value in a table cell changes, for example when a new value is typed into it from the keyboard, the values in all cells that depend on this cell are automatically recalculated.

Columns and rows can have their own names. The monitor screen is a window through which you can view the table as a whole or in parts.

Spreadsheet processors are a convenient tool for accounting and statistical calculations. Each package includes hundreds of built-in mathematical functions and statistical processing algorithms. At the same time, there are powerful tools for linking tables to each other, creating and editing electronic databases.

Using specific tools, you can automatically receive and print customized reports and use dozens of different types of tables, graphs, charts, provide them with comments and graphic illustrations.

Spreadsheet processors have a built-in help system that provides the user with information on each of the specific menu commands and other reference data. With the help of multidimensional tables, you can quickly make selections in the database according to any criterion.

The most popular spreadsheet processors are Microsoft Excel (Excel) and Lotus 1-2-3.

In Microsoft Excel, many routine operations are automated; special templates allow you to create reports, import data, and much more.

Lotus 1-2-3 is a professional spreadsheet processor. Great graphical capabilities and a user-friendly interface of the package help you quickly navigate it. Using this processor, you can create any financial document, a report for accounting, draw up a budget, or even place all these documents in databases.

7.4. The concept of shells

The most popular shell among users of an IBM-compatible computer is the Norton Commander software package. Its main task is to perform the following operations:

▪ creating, copying, moving, renaming, deleting and searching for files, and changing their attributes;

▪ display of the directory tree and the characteristics of the files that are part of it in a form convenient for user perception;

▪ creating, updating and unpacking archives (groups of compressed files);

▪ viewing text files;

▪ editing text files;

▪ execution of almost all DOS commands from its environment;

▪ launching programs;

▪ issuing information about computer resources;

▪ creating and deleting directories;

▪ support for intercomputer communication;

▪ support for email via modem.

At the end of the XX century, the MS Windows 3.x graphical shell gained great popularity all over the world. Its advantage is that it makes the computer easier to use: its graphical interface allows commands to be selected with the mouse from program menus in a matter of seconds, instead of typing complex commands at the keyboard. The Windows operating environment, which works in conjunction with the DOS operating system, implements all the features necessary for the user's productive work, including multitasking.

The Norton Navigator Shell is a collection of powerful file management and Windows enhancements. This program helps to save time on almost all operations: searching for files, copying and moving files, opening directories.

7.5. Graphic editor

A graphics editor is a program designed to automate the process of building graphic images on a computer screen. With its help, you can draw lines and curves, paint areas of the screen, create inscriptions in various fonts, etc. The most common editors can also process images obtained with scanners and display pictures in a form suitable for inclusion in a document prepared with a text editor.

Many editors are capable of obtaining images of three-dimensional objects, their sections, spreads, wireframe models, etc.

With CorelDRAW, which is a powerful graphics editor with publishing features, graphics editing and 3D modeling tools, it is possible to obtain a three-dimensional visual representation of various kinds of inscriptions.

7.6. The concept and structure of the data bank

A databank is a form of organization of storage and access to information and is a system of specially organized data, software, technical, language, organizational and methodological means that are designed to ensure centralized accumulation and collective multi-purpose use of data.

The data bank must meet the following requirements:

▪ satisfy the information needs of external users, provide the ability to store and change large volumes of various information;

▪ comply with the specified level of reliability of the stored information and its consistency;

▪ grant access to data only to users who have the appropriate authority;

▪ be able to search for information by any group of characteristics;

▪ meet the necessary performance requirements when processing requests;

▪ be able to reorganize and expand when software boundaries change;

▪ provide users with information in various forms;

▪ guarantee simplicity and convenience for external users to access information;

▪ be able to simultaneously serve a large number of external users.

The data bank consists of two main components: a database and a database management system.

The core of the data bank is the database, which is a collection of interrelated data that is stored together with minimal redundancy so that it can be used optimally for one or more applications. In this case, the data is stored in such a way that they are independent of the programs using them; to add new or transform existing data, as well as to search for data in the database, a common managed method is used.
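The "common managed method" for adding and searching data can be illustrated with SQLite from the Python standard library: an in-memory database stands in for the data bank, and every application reaches the data through the same query interface rather than through its own file formats. The table and names are invented for illustration:

```python
# Sketch: one managed access method (SQL) shared by all applications.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE clients (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
db.executemany("INSERT INTO clients (name, city) VALUES (?, ?)",
               [("Ivanov", "Kyiv"), ("Petrov", "Lviv"), ("Sidorov", "Kyiv")])

# Any program searches through the same interface, independent of storage:
rows = db.execute("SELECT name FROM clients WHERE city = ?", ("Kyiv",)).fetchall()
print([r[0] for r in rows])  # ['Ivanov', 'Sidorov']
```

Because the data are addressed by content, not by file layout, programs stay independent of how the data are physically stored, which is exactly the independence the definition above requires.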

The following requirements are imposed on the organization of databases:

1) easy, fast and cheap implementation of database application development;

2) the possibility of multiple use of data;

3) saving the costs of mental labor, expressed in the existence of a program and logical data structures that are not altered when changes are made to the database;

4) simplicity;

5) ease of use;

6) flexibility of use;

7) high speed of processing unplanned requests for data;

8) ease of making changes;

9) low costs: low cost of storing and using data and minimal cost of making changes;

10) low data redundancy;

11) productivity;

12) reliability of data and compliance with one level of updating; it is necessary to apply control over the reliability of data; the system prevents different versions of the same data elements from being available to users at different stages of updating;

13) secrecy; unauthorized access to data is impossible; restriction of access to the same data for different types of their use can be carried out in different ways;

14) protection from distortion and destruction; data must be protected from failures;

15) readiness; the user quickly receives data whenever he needs it.

In the process of creating and operating a data bank, users of different categories participate, with the main category being end users, i.e. those for whose needs the data bank is being created.

7.7. Organizer programs

The organizer program is designed to provide effective time planning for a business person. It is used both in standalone mode and in shared mode.

This program allows you to store, schedule and manage information about events, appointments, meetings, tasks and contacts.

An event is an occurrence that takes place over an interval of a day or more, for example, a birthday.

An appointment is an event for which time is reserved but to which no resources or other people are assigned, such as a conversation, a lecture, etc. Appointments can be one-time or recurring.

A meeting is an appointment for which resources are assigned and to which people are invited, for example a business meeting.

A task is a set of necessary requirements that must be met.

A contact is an organization or person with whom a connection is maintained. Typically, the information stored about a contact includes job title, postal address, telephone number, etc.

The program has the ability to use notes and diaries. Notes are the electronic equivalent of a loose-leaf paper notepad. The diary is a means of storing important documents, accounting for various actions and events.

When planning, the schedule includes an indication of notification of each of the specific events, and this allows you not to forget about an important event. Contact details can be easily found, read and updated in the organizer; it also stores information that is used to generate an electronic address of any type. Microsoft Outlook is a convenient tool for working with e-mail. The user of this program in the teamwork mode grants access rights to someone else's schedule for scheduling meetings and appointments.
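The notification mechanism described above can be sketched with the standard datetime module. The appointments and reminder offsets below are invented for illustration:

```python
# Sketch: each appointment carries a reminder offset; the organizer
# lists reminders that are currently due.
from datetime import datetime, timedelta

appointments = [
    {"what": "Lecture", "when": datetime(2024, 5, 20, 10, 0),
     "remind": timedelta(minutes=15)},
    {"what": "Meeting", "when": datetime(2024, 5, 20, 14, 0),
     "remind": timedelta(hours=1)},
]

def due_reminders(now: datetime) -> list[str]:
    """Appointments whose reminder time has arrived but which have not started yet."""
    return [a["what"] for a in appointments
            if a["when"] - a["remind"] <= now < a["when"]]

print(due_reminders(datetime(2024, 5, 20, 9, 50)))   # ['Lecture']
print(due_reminders(datetime(2024, 5, 20, 13, 30)))  # ['Meeting']
```

A real organizer such as Microsoft Outlook stores the schedule persistently and raises the reminder as a pop-up, but the underlying check is of this form.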

There are the following types and modes of operation:

▪ with mail folders, which include folders for incoming, outgoing, sent and deleted messages;

▪ calendar in the most user-friendly view. For example, review the schedule of planned activities, meetings and events, plan your own schedule;

▪ address information about any individual or legal entity;

▪ a diary in which information about completed contacts, meetings, assignments, open files, etc. is automatically entered;

▪ notes to remind you of what is happening;

▪ using it as a file Explorer.

Microsoft Outlook can be launched in one of two ways: by clicking the Start button, selecting Programs, and then Microsoft Outlook, or by using the Microsoft Outlook button on the MS Office panel.

The Microsoft Outlook window is divided into two parts by a vertical bar. The Microsoft Outlook panel on the left contains icons for program elements: Diary, Calendar, Contacts, Notes, Tasks. On the right is the work area, the contents of which change when you click on one of the icons on the left. You can see other icons as you scroll to the left. To select the Inbox folder on the screen, click the Mail icon. By clicking on the icon Other folders, you can see the contents of the folders of the hard disk file structure.

You can hide the Outlook bar by right-clicking on it and selecting Hide Outlook Bar from the context menu. To navigate between Outlook items, click the arrow to the right of the folder name and select the required Outlook item from the list. You can also navigate through items sequentially using the Previous and Next buttons on the toolbar.

7.8. Presentation programs

You can create presentations using the AutoContent Wizard. To do this, after clicking the PowerPoint icon in the Microsoft Office panel, wait for the main program window and the Helpful Hint dialog box, which contains information that can help with further work on the presentation. Clicking the Next button in this window displays the next tip, and clicking the OK button closes the window. Once the dialog box has closed, PowerPoint offers several ways to create a presentation: using the AutoContent Wizard, a presentation template, or just a blank presentation. It is also possible to open a file of an already existing presentation.

If the user is not familiar with how to develop presentations, then it is better to use the help of the AutoContent Wizard. To do this, select the appropriate radio button and press the OK button in the above window. As a result, six dialog boxes will appear on the screen in succession, in which it is possible to set the main characteristics of the presentation being created.

The AutoContent Wizard advances to the next dialog box when you click the Next button, and returns to the previous window when you click the Back button.

The second window collects data for the design of the title slide: information about the user, the company name, a motto, and so on. This information is placed on the title slide.

The most important is the third window of the AutoContent Wizard, which is called the Presentation Type Selection window. It provides the following presentation types:

1) strategy recommendation;

2) selling a product, service or idea;

3) training;

4) report on achievements;

5) reporting bad news, etc.

Assume that the type selected is Sell a product, service, or idea. The content should talk about the benefits of this product, service or idea, compare it with competitors, etc.

If no suitable topic is found in this window, click the Other button to get a list of presentation templates. After selecting a presentation template, you must click the Next button and go to the last window of the AutoContent Wizard. Otherwise, in the fourth window, you should select the presentation style and set the duration of your speech. The fifth window defines how the presentation will be given out and indicates whether a handout is needed. Finally, the sixth PowerPoint window informs you that the preliminary work on creating the presentation is completed, and prompts you to click the Finish button. After a certain time, the title slide of the presentation will appear on the computer screen. In order not to lose the results of your work, you should save the presentation in the appropriate folder by calling the Save command on the File menu.

The PowerPoint system allows the user to work and view information in various ways. The type of work being done determines the appropriate type of presentation, which greatly improves the usability. There are five such types, and their establishment is carried out by pressing one of the buttons at the bottom of the main program window.

The slide view is most convenient when each slide is gradually formed, a design is chosen for it, text or graphics are inserted.

The structure type must be set to work on the text of the presentation. In this case, it is possible to view the titles of all slides, all the text and structure of the presentation.

The Slide Sorter view is the most convenient for adding transitions and setting the duration of the slide on the screen. In addition, in this mode, you can swap slides in places.

Notes view is used to create notes for the report.

The demo is used to see the results of the work. In this mode, the slides are displayed one by one on the screen. The required view is set using commands from the View menu.

A presentation looks better if all of its slides are designed in the same style. It is also often necessary to place the same design element on every slide, so PowerPoint makes it possible to set a uniform design for all slides and pages. This is done in master (sample) mode.

To enter this mode, select the Sample command in the View menu, and in the opened submenu - the presentation element, the sample of which should be corrected as you wish.

There are two slide commands in the menu - Slide Master and Title Master. The Title Master command defines the master for the title slide; the appearance of all other slides in the presentation depends on the Slide Master.

After selecting the Slide Master command, you can see that in each area of ​​the slide there is a hint about what you need to do to make any changes to the master. It is possible to set the type, style and size of the font, set the parameters of paragraphs, change the size of the areas of the sample, place a picture in it or draw some graphic element. In this case, all the elements in the master will appear on each slide of the presentation, and the changes made will immediately be reflected in all other slides.

Thus, in PowerPoint it is possible to create an individual design and define elements that should be the same for the entire presentation.

If the dialog box that opens when you call PowerPoint, or the presentation file that the user worked with, has closed, then in order to create a new presentation, you should call the New command from the File menu. After that, the Create Presentation window appears on the screen with the Presentation Designs section active. In this dialog box, you should set the presentation design template. When you click on one of the templates, its image appears in the Preview window. After selecting a template, you must double-click on it, after which the Create Slide dialog box will open. In the Select Auto Layout area, you need to define an auto layout for the slide you are creating. In the lower right corner of the window is its main and brief characteristics. After double-clicking on the Auto Layout sample, a new slide containing placeholders will appear on the screen.

The window for creating a new slide is opened by selecting the New Slide command from the Insert menu or by activating the key combination Ctrl + M.

PowerPoint presentations may include multimedia (sound bites, videos, etc.).

7.9. Working on the Internet with MS OFFICE 97 applications

The Internet is capable of supporting all components of MS Office 97. With Word 97, you can convert traditional DOC files into HTML Web pages. PowerPoint 97 allows you to create presentations for publishing on the WWW, and Excel 97 allows you to export the worksheets it has created as HTML tables.

In addition, the list of available Internet sites may include FTP sites. If the enterprise uses a corporate intranet, then documents can be opened directly in it. Just like the Internet, intranets use a viewer and communication software. Some of these networks allow you to access the Internet through a secure gateway called a firewall. If you have the appropriate access rights and if the FTP site supports saving files, documents can be saved to the Internet using the Save Document dialog box of MS Office programs.

Using Microsoft Excel, Word, Power Point and Microsoft Access, you can view hyperlinked MS Office documents and determine their location. In MS Office documents, to work with hyperlinks, you must have access to the Internet.

MS Office programs make it easier to view hyperlinked documents using the Web toolbar, which can be used to open the start page or search page in the Web browser. The Web toolbar helps you place documents found on the Web into the Favorites folder for quick access. It also contains a list of the 10 most recent documents opened using the Web toolbar or hyperlinks, making it possible to return to these documents quickly.

Web pages that include hyperlinks, data, tables, and charts in Excel 97 worksheets can be created using Microsoft Office applications.

Hyperlinks are shortcuts that allow you to quickly jump to another workbook or file. A jump can lead to a file on the user's computer, on an intranet, or on the Internet and WWW; hyperlinks are created from text in cells or from graphic objects such as shapes or pictures.

Office 97 combines two information technologies that define a new model of working with a computer. The first is based on the fact that information can be placed anywhere - on a local hard disk, in a local or corporate network or the global Internet; the second is that users really do not work with applications, but directly with documents and the information contained in them.

There are two ways to work:

1) work with Office applications with periodic requests in an intranet company or the Internet for the necessary Web page (document, add-in) for the application or additional information about the program;

2) work inside Internet Explorer, its use as the only environment in which you can view and modify any document located on the user's disk, on the company network or the Internet.

Office 97 and Internet Explorer form a single universal tool that allows you to view and edit documents, and this makes it possible to find, view and edit any information.

When using an Internet browser that allows you to navigate between Web pages and display them on the screen, you can find a Web page or document in three ways:

1) enter the address manually;

2) click on a text or graphic hyperlink that will request the page you are looking for;

3) click on a link that is stored in the log or node list.
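The second method relies on the fact that a hyperlink on a page is usually a relative address, which the browser resolves against the URL of the current page. The standard library's urljoin performs exactly this resolution; the addresses below are illustrative:

```python
# Sketch: resolving a hyperlink's relative address against the current page.
from urllib.parse import urljoin

current_page = "http://www.example.com/catalog/index.html"

print(urljoin(current_page, "prices.html"))
# http://www.example.com/catalog/prices.html

print(urljoin(current_page, "/about.html"))
# http://www.example.com/about.html
```

A path without a leading slash is resolved relative to the current directory, while a leading slash restarts from the site root.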

7.10. Stages of solving problems using a computer

Solving problems using a computer should consist of the following main steps, some of which are carried out without the participation of a computer.

1. Statement of the problem:

▪ gathering information about the task;

▪ expression of the problem conditions;

▪ identifying the ultimate goals of solving a problem;

▪ establishing a form for issuing results;

▪ description of data (their types, ranges of values, structure, etc.).

2. Analysis and study of the task, task models:

▪ study of existing analogues;

▪ study of hardware and software;

▪ development of a mathematical model;

▪ development of data structures.

3. Algorithm definition:

▪ establishment of the algorithm design method;

▪ identifying the form of writing the algorithm (flowcharts, pseudocode, etc.);

▪ definition of tests and testing method;

▪ development of an algorithm.

4. Programming stage:

▪ definition of a programming language;

▪ choosing ways to organize data;

▪ registration of the algorithm in the selected programming language.

5. Testing and debugging phase:

▪ syntactic debugging;

▪ debugging of semantics and logical structure;

▪ test calculations and analysis of test results;

▪ improvement of the received program.

6. Consideration of the results of solving the problem and, if necessary, refinement of the mathematical model with repeated execution of steps 2-5.

7. Maintenance of the program:

▪ refinement of the program to solve specific problems;

▪ compilation of documentation for a solved problem, mathematical model, algorithm, program, set of tests, use.

However, not all tasks require a clear sequence of these steps. Sometimes their number may change.
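The stages above can be compressed into a toy example. The task and figures are invented purely for illustration:

```python
# 1. Statement: given a list of monthly sales figures, report the average.
# 2. Analysis: the degenerate case is an empty list (no data to average).
# 3. Algorithm: sum the values, divide by their count, guard empty input.
# 4. Programming stage:

def average_sales(sales: list[float]) -> float:
    if not sales:                       # case identified during analysis
        raise ValueError("no data")
    return sum(sales) / len(sales)

# 5. Testing and debugging: a test with a known answer.
assert average_sales([10.0, 20.0, 30.0]) == 20.0
print(average_sales([10.0, 20.0, 30.0]))  # 20.0
```

Stages 6 and 7 (reviewing results, refining the model, documenting) have no code of their own but would follow if, say, the statement changed to a weighted average.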

Topic 8. Specialized professionally oriented software tools

8.1. Information systems of organizational and economic management

A system is an organized set that forms an integral unity, which is aimed at achieving a specific goal.

The purpose of the system of organizational and economic management is the optimization of organizational management, i.e., ensuring the maximum economic efficiency of its activities within a specific area of ​​activity (achieving the maximum difference between income and costs). These systems differ from organizational management systems in other areas (in particular, health care, public education), where other goals are pursued: ensuring a high life expectancy and health of the population, a quality level of education, etc.

The task of organizational management is the disaggregation of management functions within the organization.

Management functions in organizational and economic management systems are classified as follows:

1) by management stages - forecasting, analysis of production and economic activities, medium-term planning, short-term planning, operational management, audit, accounting, etc.;

2) types of production and economic activities - the main production, logistics, auxiliary production, transport, capital construction, financing, accounting, social development, etc.;

3) management levels - ministry, association (firm), enterprise (organization), workshop (department), which includes individual jobs of the performer, etc.

The formation of management functions is carried out taking into account the three main features of the functional specification. In the production sphere of activity, the allocation of management functions most often corresponds to the elements of the production process.

Control features include:

1) management of material resources;

2) human resource management;

3) financial resource management, etc.

In order to formulate tasks, the characteristics of the corresponding control functions are used, among which there are three more features that characterize the task itself:

1) belonging to a specific control object;

2) technological method for solving the problem;

3) the result of management activities.

Logistics functions can be implemented when solving the following problems:

1) planning the need for material resources;

2) concluding contracts with suppliers;

3) operational control over the execution of supply contracts;

4) accounting for supplies and settlements with suppliers, etc.

Management is a purposeful impact of the controls on the managed object and is a function of the system, which is focused either on maintaining its main quality in a changing environment, or on the implementation of some target program that ensures the stability of its functioning when a certain goal is achieved. There is another definition, according to which management is a function of organized systems, which ensures the preservation of their structure, maintenance of the mode of activity, the implementation of its program, goals.

Information is a measure of the elimination of uncertainty about the outcome of an event of interest.

Data are material objects of arbitrary form, acting as a means of providing information. Information is otherwise called knowledge about a particular subject, process or phenomenon.
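Shannon's classical formalization makes the definition above concrete: the information carried by an outcome is the amount of uncertainty it removes, measured in bits. A minimal sketch (the probabilities are illustrative):

```python
import math

def self_information(p: float) -> float:
    """Information (in bits) gained when an event of probability p occurs."""
    return -math.log2(p)

def entropy(probs: list[float]) -> float:
    """Average uncertainty (in bits) of an outcome, eliminated once it is known."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# A fair coin toss carries 1 bit of uncertainty; a 1-in-4 event, 2 bits.
print(entropy([0.5, 0.5]))     # 1.0
print(self_information(0.25))  # 2.0
```

The rarer the event, the more uncertainty its occurrence eliminates, and the more information it carries.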

Effective management of economic systems is impossible without the availability and analysis of information, processing of available data. This function is taken over by special software that helps to effectively carry out the control function.

8.2. Modern information technologies in organizational and economic management systems

The system of methods of processing, manufacturing, changing the state, properties, form of raw materials, materials or semi-finished products, which are carried out in the process of producing the final product, is called technology.

In practice, technology characterizes what, how and how much to do to obtain a material or thing with desired properties. From a scientific point of view, technology is the science of the laws of implementation of purposeful influences in various spheres of human activity. Determining the patterns of construction of production processes, the transition from the logical construction of projects to the processes of manufacturing finished products with useful functions and properties is the task of technology as a science.

Information technologies are technological processes that cover the information activities of managerial employees associated with the preparation and adoption of managerial decisions.

The peculiarity of information technologies is that they include the processes of collecting, transmitting, storing and processing information in all its possible forms. Such types of manifestation include textual, graphic, visual, speech information, etc.

The development of new technical means, the discovery of new concepts and means of organizing data, their transmission, storage and processing leads to the constant development and improvement of information technologies. To ensure effective interaction of end users with the computer system, new information technologies use a fundamentally different organization of the user interface with the computer system. Such a system is called a friendly interface system and is expressed as follows:

1) the user's right to make a mistake is ensured by protecting the information and computing resources of the system from unprofessional actions on the computer;

2) there is a wide range of hierarchical menus, hint and training systems, etc., which facilitate the process of user interaction with the computer;

3) there is a "rollback" system that allows, when performing a regulated action, the consequences of which for some reason did not satisfy the user, to return to the previous state of the system.
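The "rollback" facility described in point 3 is usually implemented as a stack of saved states. A minimal sketch, with a hypothetical Document class invented for illustration:

```python
# Rollback sketch: before each action the previous state is pushed onto a
# stack, so an unsatisfactory result can be undone by popping it back.

class Document:
    def __init__(self, text: str = ""):
        self.text = text
        self._history: list[str] = []

    def edit(self, new_text: str) -> None:
        self._history.append(self.text)  # remember the previous state
        self.text = new_text

    def rollback(self) -> None:
        if self._history:                # return to the previous state, if any
            self.text = self._history.pop()

doc = Document("draft")
doc.edit("final")
doc.rollback()
print(doc.text)  # draft
```

Real systems generalize this idea to multi-level undo over arbitrary user actions, not just text edits.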

The knowledge base is the most important element of an expert system created at the workplace of a management specialist. Such a base is a store of knowledge in a particular area of professional activity and acts as an assistant in analyzing the economic situation in the process of developing a managerial decision.

Now information technologies in the field of organizational and economic management are developing in certain main areas, thanks to which it is possible to increase the efficiency of their use. Among these areas are:

▪ intensifying the role of management specialists in preparing and solving problems of economic management;

▪ personalization of computing based on the use of a computer and related software and tools;

▪ improvement of intelligent interface systems for end users at different levels;

▪ combination of information and computing resources using computer networks of various levels;

▪ development of comprehensive measures to protect information and computing resources from unauthorized access and distortion.

Ensuring the greatest economic efficiency from the use of information technologies in the field of organizational management can be achieved in the case of the creation of automated information systems.

8.3. Information systems of organizational and economic management

In order to reveal the concept of "information system", one should proceed from two aspects:

1) the purpose of the creation and operation of the information system. Here, each information system should supply information that helps to remove uncertainty for management and other interested parties when making management and other decisions regarding the facility;

2) taking into account the real conditions in which the goal is achieved, i.e., all external and internal factors that determine the specific features, the individuality of the object.

The information system of an object is a complex of interrelated components. These components describe various aspects of the information activity of the object in the implementation of management functions within its organizational and managerial structure.

Information systems were previously classified by the degree of automation of their functions:

▪ information and reference (factual);

▪ informational and advisory (documentary);

▪ information managers.

At present, this division is seen as somewhat simplistic, for a number of reasons.

1. The principles of associative search using semantic maps can form the basis of modern factual (factographic) systems. What such systems share with elementary factographic systems is that they output only the information already available.

2. Based on the available information, documentary systems form one or more possible solutions, and the final choice is left to the human user. The range of such systems is extremely wide: from elementary direct-count calculations and multivariate optimization problems to expert systems.

3. Information management systems are considered the highest level of automation and can use algorithms that are quite easy to implement, for example, automatic notification of suppliers (payers, debtors) by comparing the current date and all actual receipts at the current time with those planned for that moment.

In reality, such systems can function not only independently, but also jointly, complementing each other.
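The supplier-notification algorithm mentioned above (comparing figures planned up to the current moment with actual receipts) can be sketched in a few lines. All supplier names and quantities below are invented for illustration:

```python
# Sketch of automatic notification in an information management system:
# deliveries planned by today are compared with actual receipts, and
# suppliers who are behind the plan are flagged for notification.

planned = {"SupplierA": 100, "SupplierB": 50}   # units due by the current date
received = {"SupplierA": 100, "SupplierB": 30}  # units actually received

def overdue_suppliers(planned, received):
    """Return suppliers whose actual receipts lag behind the plan, with the shortfall."""
    return {s: planned[s] - received.get(s, 0)
            for s in planned if received.get(s, 0) < planned[s]}

print(overdue_suppliers(planned, received))  # {'SupplierB': 20}
```

The same comparison generalizes to payers and debtors by substituting payment schedules for delivery plans.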

The fundamental classification of information systems in the field of organizational management can be supplemented with the following classification:

1) according to the method of automation of controls:

▪ autonomous automated workstations for management specialists;

▪ autonomous local networks that unite functionally interconnected automated workstations of managers;

▪ a unified network of the organization, including its parent structures and geographically remote branches;

2) by types of automated management functions:

▪ functional (automating accounting, personnel, planning management functions, etc.);

▪ administrative (automating office work, document flow, etc.);

▪ comprehensive (covering all types of management activities);

3) by level of specialization:

▪ specialized;

▪ adaptive universal;

▪ general management;

4) by the nature of the relationship with the external information environment:

▪ closed (without automated interaction with external information systems);

▪ open (with access to publicly accessible information systems);

▪ extrasystems (fully functionally interacting with a certain range of external information systems).

8.4. Office activities in organizational and economic management systems

The concept of an office includes material and organizational aspects, while in the first case we mean premises and equipment, and in the second - the forms and structure of management. The office is most often either an independent institution, or may be part of a larger organizational structure. The peculiarity of the work of the office is that it is a source not only of final information services, but also of decisions that limit the behavior of people or the distribution of material resources. The main task of the office is to develop solutions that have value for the client. In addition, the office is an information enterprise that transforms information resources into information products.

The use of computer and other organizational equipment in the office has passed through several stages: the traditional office, the production office, and the electronic office.

The traditional office consists of a relatively small team of people with a fairly wide range of responsibilities. The typical composition of work operations in the office includes: preparing materials, printing, maintaining file cabinets, reconciling documents, working with mail, searching for information, maintaining information funds, performing calculations, conducting business conversations on the phone, working at the terminal.

The production office is characterized by large volumes of the same type of work, its clear formalization, and a rigid distribution of employee functions. In such an office, the essence of automation lies in the formation and maintenance of large information funds, their systematization, and the production of data samples.

An electronic office is a realization of the concept of comprehensive use of computer and communication tools in office activities while developing the traditions of previous forms of activity. The main functions and means of the electronic office are: providing access to documents without their duplication on paper; acceptance of documents, their control and execution; remote and joint work of employees on a document, e-mail; personal data processing; preparation of documents and their reproduction; exchange of information between databases; automation of control over document management; organization of electronic document management; information support for decision making; participation in meetings using remote access tools; work with automated information systems, etc. With the help of e-mail, PCs and computer networks, an electronic office is able to expand the range of direct interaction between people, without requiring them to actually be in the same room.

The nature and purpose of an organization's activity determine its information system and the type of information product produced and processed. If the organization's task is to produce an information product in the form of documents, then its most important activity is the storage of information related to the specifics of the activity and necessary for making managerial decisions. Such information organizations include, for example, notary offices, travel agencies, and news agencies. For supply and sales offices, it is important to know the sales markets, product manufacturers, and product prices. The main information needs of offices can be met with standard hardware and software, including software for processing textual, tabular and graphical information, PCs, tools for the prompt reproduction of documentation, and electronic communication tools.

8.5. Organizational, technical and peripheral means of information systems

Any information system must have adequate means of collecting primary data that accurately reflect the state of the subject area and the processes taking place in it. In financial and credit organizations, the amounts of loans issued are recorded, the interest payable is determined, and banknotes are counted. At industrial enterprises, the quantity of raw materials and materials received from outside, the operating time of production and transport equipment, electricity consumption, etc. are recorded.

When conducting economic or administrative activities, it is necessary to fix the properties inherent in the object on which the action is performed. The object must be identified, measured, determined in time, marked with additional specific characteristics. The identifier may be the inventory number of the production equipment.

Each of the processes of obtaining and short-term storage of data can be implemented using various technical means. Measuring instruments and counters are used to register physical quantities, while recorders, which can receive information automatically from sensors, record and control the operation of equipment, the state of climatic and chemical processes, etc. As a comprehensive means of collecting and recording primary data, specialized automated data collection systems and PCs can be used.

The means of registering information and creating documents include copiers, printers, etc. Among the main technical characteristics of copiers, there are: copying speed; maximum size of original and copy; the admissibility of scaling; the presence of an automatic paper feeder and the possibility of automatic layout of copies; guaranteed amount of copying.

Means of information storage include office equipment (storage of paper documents), file cabinets, cabinets or racks of various designs (storage of folders), special boxes, cases, boxes (storage of machine media), etc.

Means of operational communication and information transfer provide information exchange processes both between internal objects of the organization and with external ones. Intra- and inter-institutional means of communication and information transmission allow the reproduction and forwarding of messages in speech, visual, sound and documented forms. Among them are telephone and fax machines, pagers, video monitoring and recording installations and systems, etc.

Document processing facilities include machines for bookbinding, physical destruction of documents, applying protective coatings to documents, sorting, counting documents and other technological procedures.

Folding and collating machines and cutting and fastening devices are used to automate booklet-making and bookbinding work. Folding machines help prepare documents for insertion into envelopes or assembly into notebooks; collating machines mechanize the selection of documents; cutting devices are divided into paper-cutting equipment and envelope-opening devices. Trade enterprises often use electronic cash registers.

8.6. Concept of business graphics

The branch of computer science related to the creation and use of graphic image processing tools is called computer graphics.

A drawing image usually associated with text is an illustration, or text decoration. Illustrations are divided into numerical and textual. The quantitative side of economic phenomena is characterized by illustrations of numbers (indicators); textual illustrations describe the qualitative side that cannot be expressed in numbers. Illustrations of indicators are produced using diagrams, color and tone shading, and other ways of displaying indicators on geographical maps. Among textual illustrations, illustrations of concepts stand out: they are intended for the graphical interpretation of economic abstractions. Usually concepts are presented in textual form, i.e., verbally; an illustration supplements the verbal form of the concept, facilitates its comprehension, and helps reveal new information. For example, the intersection of concepts can be illustrated with overlapping circles.

Text is the primary form and means of aggregating data via the OLE mechanism and its network extensions. It can be linear or non-linear (a table, a database, hypertext, etc.).

Text formatting tools for the use of graphics are divided into traditional and non-traditional. The traditional ones include character design tools and text backgrounds. Character design tools can be divided into four groups:

1) typeface, which is an individual unique look of the font;

2) style, which is a set of underlining, volume, animation, etc.;

3) color palette, which is a standard palette of sixteen colors plus silver and grey;

4) character density - horizontally and vertically.

Typefaces are divided into three groups according to the level of graphics application:

1) simple (strictly shaped): monospaced typefaces such as Courier, and two kinds of proportional ones - sans-serif (Arial) and serif (Times);

2) special (specially designed), usually handwritten, Slavic, etc.;

3) thematic sets of drawings - Wingdings fonts, etc.

Text background design tools consist of four main groups:

1) a pattern, which is a certain set of hatching methods;

2) the color of the pattern, which is a standard set of colors;

3) background color, which is a standard palette with additional shades of black;

4) a border around the text.

Framing options are determined by the text unit. For example, a fragment can be bounded by a frame; a paragraph and a page - by a frame or a border line. The borders of a paragraph or fragment are drawn with straight lines, while page borders can also use ornamental drawings. A border can be made three-dimensional, with a shadow, etc.

Non-traditional design tools are used in the design of title pages, section headings and other short texts - inscriptions. The inscription, also called an envelope, can be deformed. To do this, it is performed voluminous and with a shadow. It is created as a Windows object with two features:

1) when changing its size, the font size changes;

2) the boundaries of the typesetting field cannot be set, so the text must be broken onto new lines manually.

For this reason, inscriptions are called graphic, or curly, text. In MS Office 95, curly text is created with the WordArt program; it can be circular, ring-shaped, or petal-shaped. WordArt is launched by a button on the Drawing panel, which expands the traditional options for controlling the background of text and images.

8.7. Use of graphics in business

Commercial graphics tools are used to solve analytical and psychological problems. The analytical task is a kind of help in the search for rational, i.e., sufficiently profitable and reliable solutions. The psychological task is necessary in order to provide the document with solidity, persuasiveness, and contribute to its coordination and approval.

Visual presentation of commercial indicators in business documents helps to convince investors, contributors, sponsors and others of the soundness of a commercial policy, to stimulate capital investment, etc.

The main part of information in commercial documents is indicators of profit, profitability, risk, etc. One of the main tasks of commercial graphics is to combine indicators into a table that facilitates comparison and discussion of indicators.

On diagrams, various economic indicators are displayed as dots and other geometric figures of proportional size. Diagrams make the task of visualizing the main economic indicators more feasible. Charts come in pie, line, and bar form. The same chart can show the same indicators at different times, or different types of indicators.

Commercial and geographical facts are often linked together, so they are better perceived against the backdrop of a geographical map. In this case, coloring is used.

Economic-mathematical graphics make it possible to make a favorable impression on potential investors, and this, in turn, favors the coordination of commercial documentation and the conclusion of profitable agreements.

The curly design of commercial texts makes it possible to make the text of a business document as clear and expressive as possible, and well-formed information acts similarly to a respectable appearance when meeting.

Using the Drawing panel it is possible to carry out:

▪ controlling the text outline as a picture, creating a shadow (volume);

▪ placing text within the image outline and rotating the text;

▪ incorporating an image into text with various wrapping options.

Among the means of automated illustration are:

▪ a multimedia information retrieval system, including commercial topics, transport, etc.;

▪ an image editing mechanism that can provide disassembly, shading tools, color models, palettes and smooth shading templates.

Using the tools listed above allows a novice user to prepare illustrations for complex commercial concepts and phenomena in a short period of time. For example, such as the dependence of the frequency of risks on their severity, market segmentation according to a set of criteria, etc. This can be done using a colorful three-dimensional table, a visual diagram, etc.

Unlike a literary text, a commercial text has a strict structure. It may include the following graphic elements:

▪ network work schedules (generalized, alternative);

▪ technological structures (instructions for coordination and decision-making, schemes for calculating indicators);

▪ classification schemes;

▪ organizational structures of institutions, organizations;

▪ target program schemes.

The use of multimedia tools, namely animation and sounding of images, is the core of the technology of computer presentations and demonstrations. With their help, it is possible to bring the document closer to live communication, to make it more intelligible and expressive. This, in turn, allows you to make a presentation or business report more lively and visual.

LAN graphics services include:

▪ sharing of images on fixed and removable disks and pages of local clipboards, i.e. the owner of the image can control access to it using passwords;

▪ collective review and editing of images along a closed mail route;

▪ collective preparation of images.

8.8. MS GRAPH business graphics program

Color samples of diagrams are given in the built-in directories of Word, Excel, Access programs. For any user, there are two main ways to build charts:

1) using the Wizard (in Excel, Access). To do this, click the button on the standard toolbar. If it is not in Excel, you should set the panel to its default state, and if the button is not in Access, drag it from the Elements category on the Control Commands tab of the Panel Setup window;

2) via the Insert/Object command, then selecting the launch method.

Launch methods include:

▪ direct download. In this case, the MS GRAPH window appears with an example table and diagram. Then you need to correct the data, chart type and format it, and if the table is prepared in advance, it should be highlighted before loading MS GRAPH;

▪ download using Excel, after which an Excel window opens with two sheets.

MS GRAPH builds charts only of strictly defined types; only the template parameters can be changed, in arbitrary order. Charts are grouped by the method of displaying indicators and by the type and properties of the coordinate system. Charts are constructed in rectangular, polar and bubble coordinate systems.

A coordinate is a constant indicating the position of an indicator in the space of valid values. A coordinate system can be three-dimensional (bubble), two-dimensional (radar) or one-dimensional (pie). The dimension of the coordinate system is the number of constants required to identify an indicator; the bubble system adds a third dimension - the size of the bubble.

Finding out the structure of the diagram is possible in one of four ways.

1. Select a diagram. Use the arrow keys to view the chart element names in the Formula Bar Name field.

2. Select a chart, view the list of the Chart Elements field on the Chart toolbar.

3. Select a diagram, execute the command Diagram / Diagram Options and examine the contents of the window of the same name.

4. Double-click on an element and examine the contents of the Format/Data element name window.

Series in charts are dots, bars, and other representations of the columns and rows of a table.

Numeric axes are value axes that are selected from the columns or rows of a table. They are arranged vertically, horizontally or at an angle in the radar chart.

In economics, a category serves as a section or level of an indicator; in a chart, categories are the names of the table's columns or rows, placed on one axis and corresponding to the values on the other. Some charts have no category axes, such as pie, donut and radar charts. A 3D histogram has two category axes.

A legend is a notation for chart elements.

Some charts can use special value axes to represent series in different scales or units. For example, rates and sales volumes of securities, prices and sales volumes in natural units. When there is a large range of values, the more compact logarithmic axis is most convenient.
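The compactness of the logarithmic axis is easy to see numerically: values spanning six orders of magnitude map onto a short range of logarithms. A small illustration with invented values:

```python
import math

# On a linear axis these values span six orders of magnitude, so the small
# ones become invisible; their base-10 logarithms fit into the range 0..6.
values = [1, 10, 100, 1000, 1000000]
log_positions = [math.log10(v) for v in values]
print(log_positions)
```

This is why a logarithmic value axis is the most convenient choice when a series mixes very large and very small indicators.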

All diagrams show the processes of changing the series of indicators and their correlation.

Trends are detected by smoothing out random fluctuations in a series of indicators. They are used to study mechanisms, phenomena and predict their development. There are two methods of smoothing: graphic and graphic-analytical. In the first case, you can get a trend graph, in the second - a graph and statistical estimates of the trend. There are three graphic-analytical methods:

1) trend equations;

2) moving average;

3) exponential average.
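The moving average and the exponential average can be sketched in a few lines; the window width and the smoothing constant alpha below are illustrative choices, not prescribed values:

```python
# Two smoothing methods for a series of indicators: the moving average
# replaces each value with the mean of a sliding window, the exponential
# average weights recent values more heavily.

def moving_average(series, window=3):
    """Smooth a series by averaging over a sliding window."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

def exponential_average(series, alpha=0.5):
    """Smooth a series by exponentially weighting recent values."""
    result = [series[0]]
    for x in series[1:]:
        result.append(alpha * x + (1 - alpha) * result[-1])
    return result

sales = [10, 12, 9, 14, 13, 15]  # invented monthly indicators
print(moving_average(sales))
print(exponential_average(sales))
```

Plotting either smoothed series over the raw one reveals the trend hidden behind random fluctuations.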

8.9. General characteristics of the technology for creating applied software

Solving a problem on a computer is a process of obtaining resultant information based on the processing of initial information through the use of a program composed of commands from the control system of a computer. The program is a normalized description of the sequence of actions of certain computer devices, depending on the specific nature of the conditions of the problem.

Technologies for developing programs for solving a problem depend on two factors:

1) whether the program is developed as an integral element of a unified system of automated information processing, or as a relatively independent, local component of a common software package that solves computer control problems;

2) what software and tools are used to develop and implement tasks on a computer.

Software tools are software components that allow you to program the solution of control problems. They include:

1) algorithmic languages and their corresponding translators;

2) database management systems (DBMS) with language programming tools in their environment;

3) spreadsheets containing their customization tools.

The process of solving applied problems consists of several main stages. The first is setting the problem. At this stage the organizational and economic essence of the task is revealed, i.e., the goal of its solution is formulated; its relationship with other previously studied tasks is determined; the periodicity of its solution is given; the composition and forms of presentation of input, intermediate and result information are established; the forms and methods of controlling the reliability of information at the main stages of solving the problem are described; the forms of user interaction with the computer when solving the problem are specified, etc.

Of particular importance is a detailed description of the input, output and intermediate information characterizing the following factors:

▪ the form of presentation of individual attributes;

▪ the number of characters allocated for recording attributes, based on their maximum length;

▪ the type of each attribute, depending on its role in solving the problem;

▪ the source of each attribute.

The second stage is the economic and mathematical description of the problem and the choice of a method for solving it. The economic-mathematical description of the problem makes it possible to make the problem unambiguous in the understanding of the program developer. In the process of preparing it, the user can apply various sections of mathematics. For a formalized description of the formulation of economic problems, the following classes of models are used:

1) analytical - computational;

2) matrix - balance;

3) graphic, a particular type of which are network.

By choosing a model class, one can not only facilitate and speed up the process of solving the problem, but also improve the accuracy of the results obtained.
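A classic instance of the matrix (balance) model class is the input-output balance x = Ax + d, where A holds direct-cost coefficients and d is final demand. The 2x2 coefficients and demand figures below are invented; the sketch solves (I - A)x = d by Cramer's rule:

```python
# Matrix (balance) model sketch: find gross outputs x such that production
# covers both intermediate consumption Ax and final demand d.

def solve_balance(a, d):
    """Solve (I - A)x = d for a 2x2 coefficient matrix by Cramer's rule."""
    m = [[1 - a[0][0], -a[0][1]],
         [-a[1][0], 1 - a[1][1]]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    x0 = (d[0] * m[1][1] - m[0][1] * d[1]) / det
    x1 = (m[0][0] * d[1] - d[0] * m[1][0]) / det
    return [x0, x1]

A = [[0.2, 0.3],   # units of product i consumed per unit of product j
     [0.1, 0.4]]
d = [100, 200]     # final demand for each product
print(solve_balance(A, d))  # gross output levels covering the demand
```

For realistic model sizes the same balance is solved with a general linear-algebra routine rather than Cramer's rule.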

When choosing a method for solving a problem, it is necessary that the chosen method:

1) guarantees the necessary accuracy of the results and is free from degeneration (infinite looping);

2) allows ready-made standard programs to be used for the problem or its individual fragments;

3) requires the minimum amount of initial information;

4) ensures the fastest possible obtaining of the desired results.

The third stage is the algorithmization of the solution of the problem, i.e., the development of an original or adaptation of an already known algorithm.

Algorithmization is a complex creative process based on fundamental concepts of mathematics and programming.

The algorithmization process for solving a problem is most often implemented according to the following scheme:

1) allocation of autonomous stages of the process of solving the problem;

2) a formalized description of the content of the work performed at each selected stage;

3) checking the correctness of using the chosen algorithm on various examples of solving the problem.
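The three-step scheme above can be sketched on a toy problem, a simple-interest calculation invented for illustration: autonomous stages, a formalized description of each stage, and a check of the algorithm on examples with known answers:

```python
# Algorithmization sketch: the task is split into autonomous stages,
# each stage is formalized as a function, and the assembled algorithm
# is checked on test examples (step 3 of the scheme).

def validate(principal, rate, years):          # stage 1: input control
    if principal < 0 or rate < 0 or years < 0:
        raise ValueError("inputs must be non-negative")

def compute(principal, rate, years):           # stage 2: the calculation itself
    return principal * rate * years / 100

def simple_interest(principal, rate, years):   # the assembled algorithm
    validate(principal, rate, years)
    return compute(principal, rate, years)

# stage 3: checking correctness on examples with known answers
tests = [((1000, 5, 2), 100.0), ((200, 10, 1), 20.0)]
for args, expected in tests:
    assert simple_interest(*args) == expected
print("all test examples passed")
```

Decomposing the algorithm this way makes each stage independently testable before the whole program is assembled.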

8.10. Application software

Application software (APS) is a set of software products that are of interest to users and are designed to solve everyday problems of information processing.

An application software package (APP) is a set of programs focused on solving a certain class of problems.

All software is divided into design tools and means of use.

Design tools include software that are designed to create information systems and are used at the workplaces of specialists of various profiles:

1) DBMS - are used to create, maintain and use databases;

2) computer-aided design (CAD) systems - allow solving the problems of drawing and designing various mechanisms using a PC;

3) electronic document management systems - designed to ensure paperless circulation of documents in enterprises;

4) information storages (data banks, knowledge banks) - provide storage of large volumes of accumulated information;

5) geographic information systems - are used to model the processes of development and management of various natural resources, geological exploration, etc.

Means of use are software for processing various kinds of information:

1) word processors and text editors - input, editing and preparation for printing of any documents;

2) spreadsheet processors - creating spreadsheets and performing actions on the data contained in these tables;

3) graphic processors - creation and editing of graphic objects, cartoons and other animation on the computer screen;

4) integrated APPs - creation of a unified business environment on their basis;

5) APPs of analysis methods - solving problems of analysis in a certain area;

6) telecommunication and network programs - maintenance of global and local networks, programs for e-mail;

7) a set of economic APPs - used by specialists working in the economic sphere;

8) training and testing programs - obtaining new knowledge, testing in various disciplines, etc.;

9) multimedia software packages - creating, editing and listening to music, viewing and processing video, auxiliary programs (codecs), games;

10) set of application programs - recording and diagnostics of CD-R/RW and DVD-R/RW discs.

8.11. Software systems design technology

The need to create automated information processing systems led to the concept of the database as a single, centralized repository of all the information needed to solve management problems. The concept is theoretically sound; in practice, however, it leads to a significant loss of time spent searching the database for, and selecting, the information needed to solve a particular problem. At present, the database concept provides a reasonable compromise between minimizing the necessary duplication of information and the efficiency of retrieving and updating data. Such a solution is achievable only when a systems analysis of the entire complex of tasks to be automated is performed already at the stage of describing the system: its goals and functions, the composition and specifics of the information flows, the information content of the tasks, and even individual program modules. The systems approach rests on the propositions of the general theory of systems and is most effective in solving complex problems of analysis and synthesis that require the simultaneous use of several scientific disciplines.

Another important factor that necessitates a systematic approach (starting from the stage of formulating the requirements and setting tasks) is that this stage accounts for up to 80% of all costs for the development of application software. However, it is of particular importance in ensuring that development results meet the needs of end users.

The need for a systematic approach to developing software tools for automating organizational and economic management systems has led to a differentiation among specialist developers: system analysts, system engineers, and applied and system programmers are now distinguished in their ranks.

The system analyst formulates the general formal requirements for the system software. The duties of a systems engineer are to transform general formal requirements into detailed specifications for individual programs, participate in the development of the logical structure of the database.

The application programmer's responsibility is to refine the specification into the logical structure of the program modules and then into the program code.

The system programmer must ensure the interaction of program modules with the software environment within which application programs are to work.

Another feature of the system development of application programs is their focus on the use of integrated and distributed databases. In this case, DBMS language tools began to be used as tools for developing software components along with programming languages.

Better software and tools oriented toward management professionals who are not programmers are appearing and coming into wide use. This fact has radically changed the nature of the technology for preparing and solving economic problems.

With the growth in the production of new microprocessors, the priorities and relevance of the problems that are inherent in traditional technologies for developing application programs have changed dramatically. The possibility of excluding professional programmers from the technological chain makes it possible to speed up the process of developing applied software.

8.12. Modern methods and tools for developing applied software

The concept of "modular design" is closely related to the implementation of the top-down design method. A sequence of logically interconnected fragments, designed as a separate part of the program, is called a module. The following properties of software modules are distinguished:

▪ a module can be referenced by name, including from other modules;

▪ upon completion of work, the module must return control to the module that called it;

▪ the module must have one entry and one exit;

▪ the module must be small enough to remain easy to survey.

When developing complex programs, a main control module is separated from subordinate modules that implement individual control functions and functional processing, and from auxiliary modules that provide service functions.

The modular principle of software development has a number of advantages:

1) a large program can be developed simultaneously by several developers, which reduces development time;

2) it is possible to create a library of the most used programs and use them;

3) if segmentation is necessary, the procedure for loading large programs into main memory (RAM) becomes much simpler;

4) there are many natural control points designed to monitor the progress of the development of programs and control the execution of programs;

5) effective testing of programs is provided, designing and subsequent debugging are much easier.
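The modular layout described above can be sketched as follows (a minimal illustration in Python; all module names are invented for the example): a main control module delegates to subordinate modules, each with a single entry that returns control to its caller.

```python
def read_input(raw):
    """Subordinate module: functional processing of raw input."""
    return [int(x) for x in raw.split()]

def compute_total(values):
    """Subordinate module: a simple computational function."""
    return sum(values)

def format_report(total):
    """Auxiliary (service) module: output formatting."""
    return f"Total: {total}"

def main(raw):
    """Main control module: coordinates the subordinate modules."""
    values = read_input(raw)
    total = compute_total(values)
    return format_report(total)

print(main("10 20 30"))  # -> Total: 60
```

Because each module is referenced only by name and has one entry and one exit, the modules can be developed and tested by different people and reused from a common library.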

Structured programming is used to facilitate the process of developing and debugging software modules, as well as the process of their subsequent maintenance and modification.

The development of software and tools for programming economic problems is based on programming automation systems (programming systems), which make it possible to solve many problems directly in the computer's operating environment.

The tasks of economic management have a number of features that distinguish them from other types of tasks:

1) the dominance of tasks with relatively simple computational algorithms and the need to form cumulative results;

2) work with large arrays of initial information;

3) the requirement to provide most of the resulting information in the form of tabular documents.

CASE technology is a set of tools for the system analysis, design, development, and maintenance of complex software systems; it gives developers extensive opportunities for various kinds of modeling. The consistent interaction of all specialists involved in software development is ensured by centralized storage of all the information needed for design and by control over data integrity.

The ISDOS project consists of modules providing:

▪ input, control and coding of specifications of the designed system;

▪ analysis of the correctness of setting tasks and their consistency;

▪ identifying errors and issuing messages to users, as well as eliminating duplication in source information;

▪ transformation of problem statements after checking the source data into machine programs;

▪ identification of the main elements of the information system.

The listed modules interact with each other, but their division is rather conventional.

Topic 9. Basics of algorithmization and programming

9.1. The concept of an algorithm

An algorithm is a strictly defined and understandable instruction to the executor to perform a sequence of actions aimed at solving the given problem.

The term "algorithm" comes from the Latin form of the name of the Central Asian mathematician Al-Khwarizmi - Algorithmi. Algorithm is one of the basic concepts of computer science and mathematics.

The executor of the algorithm is some abstract or real (technical, biological or biotechnical) system that is capable of performing the actions prescribed by the algorithm.

To characterize the executor, several concepts are used:

▪ environment;

▪ command system;

▪ basic actions;

▪ failures.

The environment (or setting) is the "habitat" of the executor.

Any of the executors can execute commands only from some strictly defined list, which is the executor's command system. Applicability conditions are set for each command (in what environment states the command can be executed) and the results of command execution are given.

After calling the command, the executor performs the corresponding elementary action.

An executor may also fail if a command is called when the state of the environment is invalid for it. Most often, the executor knows nothing about the purpose of the algorithm: it performs all the actions proposed to it without asking "why" or "what for".

In computer science, the universal executor of algorithms is the computer.

The main properties of the algorithms are:

1) understandability for the executor - the executor of the algorithm must know how to execute it;

2) discreteness (discontinuity, separation) - the algorithm should represent the process of solving the problem as a sequential execution of simple (or previously defined) steps (stages);

3) certainty - each rule of the algorithm must be clear, unambiguous and leave no room for arbitrariness. This property ensures the execution of the algorithm mechanically, without requiring any additional instructions or information about the problem being solved;

4) effectiveness (or finiteness) - the algorithm should lead to the solution of the problem in a finite number of steps;

5) mass character - the algorithm for solving a problem is developed in a general form, i.e., it can be applied to a whole class of problems that differ only in the initial data. The initial data can be selected from a certain area, called the area of applicability of the algorithm.

In practice, the following forms of representation of algorithms are most often encountered:

▪ verbal - written in natural language;

▪ graphic - using an image of graphic symbols;

▪ pseudocodes - semi-formalized descriptions of algorithms in some conditional algorithmic language, which include both elements of a programming language and natural language phrases, generally accepted mathematical notations, etc.;

▪ software - texts in programming languages.

The verbal way of writing algorithms is a description of the successive stages of data processing. The algorithm can be given in an arbitrary presentation in natural language. For example, the algorithm for finding the greatest common divisor of two natural numbers can be represented as the following sequence of actions:

1) setting two numbers;

2) if the numbers are equal, then the choice of any of them as an answer and stop, otherwise - the continuation of the algorithm;

3) determining the largest of the numbers;

4) replacement of the larger of the numbers by the difference between the larger and smaller of the numbers;

5) repetition of the algorithm from step 2.

The above algorithm is applicable to any natural numbers and must lead to the solution of the problem.
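The verbal algorithm above can be written directly as code, step for step (a sketch in Python):

```python
def gcd(a, b):
    """Greatest common divisor by the subtraction algorithm described above."""
    # Step 2: if the numbers are equal, either of them is the answer.
    while a != b:
        # Steps 3-4: replace the larger number with the difference
        # between the larger and the smaller; step 5: repeat.
        if a > b:
            a = a - b
        else:
            b = b - a
    return a

print(gcd(12, 18))  # -> 6
```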

The verbal method is not widely used, as it has some disadvantages:

▪ such descriptions are not strictly formalized;

▪ they are verbose;

▪ they allow ambiguous interpretation of individual prescriptions.

The graphic way of presenting algorithms is more compact and visual than the verbal one: the algorithm is depicted as a sequence of interconnected functional blocks, each of which corresponds to the execution of one or more actions. This graphical representation is called a flowchart (block diagram).

In the flowchart, each of the types of actions (input of initial data, calculation of expression values, checking conditions, controlling the repetition of actions, finishing processing, etc.) corresponds to a geometric figure represented as a block symbol. Block symbols are connected by transition lines, which determine the order in which actions are performed.

Pseudocode is a system of notation and rules that is designed to uniformly write algorithms. It occupies an intermediate position between natural and formal languages. On the one hand, pseudocode is similar to ordinary natural language, so algorithms can be written and read in it like plain text. On the other hand, some formal constructions and mathematical symbols are used in pseudocode, due to which the notation of the algorithm approaches the generally accepted mathematical notation.

Pseudocode does not use strict syntactic rules for writing commands that are inherent in formal languages, which makes it easier to write an algorithm at the design stage and makes it possible to use a wider set of commands designed for an abstract executor. However, pseudocode most often contains some constructs inherent in formal languages, which facilitates the transition from writing in pseudocode to writing an algorithm in a formal language. For example, in pseudocode, as well as in formal languages, there are function words, the meaning of which is determined once and for all. They are highlighted in bold in printed text and underlined in handwritten text. There is no single or formal approach to defining pseudocode; therefore, various pseudocodes are used, differing in the set of function words and basic (basic) constructions.

The software form of the representation of algorithms is sometimes characterized by some structures consisting of separate basic (basic) elements. With this approach to algorithms, the study of the basic principles of their design should begin with these basic elements. Their description is carried out using the language of algorithm schemes and the algorithmic language.

9.2. Programming systems

Machine-oriented languages are machine-dependent programming languages. The main constructive means of such languages make it possible to take into account the peculiarities of the architecture and operating principles of a certain computer; that is, they have the same capabilities as machine languages and make the same demands on programmers. However, unlike machine languages, they require prior translation into machine language of the programs written in them.

Programming languages of this type include autocodes, symbolic coding languages, and assemblers.

Machine-independent languages do not require full knowledge of the specifics of computers. With their help, a program can be written in a form that allows its execution on computers with various types of machine operations; the binding to a particular machine is assigned to the appropriate translator.

The reason for the rapid development and use of high-level programming languages is the rapid growth of computer performance and the chronic shortage of programmers.

An intermediate place between machine-independent and machine-dependent languages is occupied by the C language. It was created in an attempt to combine the advantages inherent in the languages of both classes. This language has a number of features:

▪ makes maximum use of the capabilities of a specific computing architecture; because of this, C programs are compact and work efficiently;

▪ allows you to make best use of the enormous expressive power of modern high-level languages.

Languages are also divided into procedure-oriented and problem-oriented (domain-oriented).

Procedure-oriented languages, such as Fortran, Cobol, BASIC, and Pascal, are most often used to describe algorithms for solving a wide class of problems.

Domain-oriented languages, in particular RPG, Lisp, APL, GPSS, are used to describe information processing processes in a narrower, specific area.

Object-oriented programming languages allow you to develop software applications for a wide range of diverse tasks that have commonality in the implemented components.

Consider the methods of using programming languages.

Interpretation is statement-by-statement translation of the source program with immediate execution of each translated statement. The interpretation method has two main disadvantages:

1) the interpreting program must remain in the computer's memory throughout the entire execution of the source program, i.e., it occupies some fixed amount of memory;

2) the same statement is translated anew as many times as it is executed in the program, which sharply reduces program performance.

Interpreter translators are quite common because they support dialog mode.

The processes of translation and execution during compilation are separated in time: first, the source program is fully translated into machine language, after which the translated program can be executed repeatedly. Translation by the compilation method requires repeated "viewing" of the program being translated, i.e., compilers are multi-pass. The result of compilation is called an object module, which is an equivalent program in machine code. Before execution, the object module must be processed by a special OS program (a linker) and converted into a load module.

Translators are also used as interpreters-compilers, which combine the advantages of both translation principles.
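The difference between the two translation principles can be sketched with Python's own compile and exec facilities (a toy illustration, not a real translator): under interpretation each statement is translated again on every run, while under compilation the program is translated once and then executed repeatedly.

```python
program = ["x = 2", "x = x * 10", "x = x + 2"]

def interpret(program, runs):
    """Re-translate every statement on every execution (interpretation)."""
    translations = 0
    for _ in range(runs):
        env = {}
        for stmt in program:
            compile(stmt, "<stmt>", "exec")  # translated anew each time
            translations += 1
            exec(stmt, env)
    return env["x"], translations

def compile_then_run(program, runs):
    """Translate once, then execute the translated form repeatedly."""
    code = [compile(s, "<stmt>", "exec") for s in program]
    for _ in range(runs):
        env = {}
        for c in code:
            exec(c, env)
    return env["x"], len(code)

print(interpret(program, 5))        # -> (22, 15): 15 translations for 5 runs
print(compile_then_run(program, 5)) # -> (22, 3): only 3 translations in total
```

Both approaches compute the same result, but the interpreter pays the translation cost on every execution, which is exactly the performance loss described above.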

9.3. Classification of high-level programming languages

High-level languages are used in machine-independent programming systems. Such programming systems, compared with machine-oriented ones, are easier to use.

High-level programming languages are divided into procedure-oriented, domain-oriented, and object-oriented.

Procedure-oriented languages are used to write procedures or information processing algorithms for each specific range of tasks. These include:

a) the Fortran language, whose name comes from Formula Translation ("formula conversion"). Fortran is one of the oldest high-level programming languages. The duration of its existence and use is explained by the simplicity of its structure;

b) the Basic language, which stands for Beginner's All-purpose Symbolic Instruction Code, which means "multipurpose symbolic instruction code for beginners", was developed in 1964 as a language for teaching programming;

c) the C language, used since the 1970s as a system programming language, originally for writing the UNIX operating system. In the 1980s, the C++ language was developed on the basis of C, practically including C and adding object-oriented programming tools;

d) the Pascal language, named after the French scientist B. Pascal, was created in 1968-1971 by the Swiss scientist N. Wirth. At first Pascal was used for teaching programming, but over time it came to be widely used for developing software tools in professional programming.

Domain-oriented languages are used to solve entire classes of new problems that have arisen with the constant expansion of the field of application of computer technology:

a) the Lisp language (LISt Processing), created by J. McCarthy in 1958. Initially, it was used as a tool for working with character strings. Lisp is used in expert systems, analytical computing systems, etc.;

b) the Prolog language (Prolog - Programming in Logic), used for logical programming in artificial intelligence systems.

Object-oriented languages are still developing. Most of them are versions of procedural and problem-oriented languages, but programming in the languages of this group is more visual and easier. The most commonly used are:

a) Visual Basic (~ Basic);

b) Delphi (~ Pascal);

c) Visual Fortran (~ Fortran);

d) C++ (~ C);

e) Prolog++ (~ Prolog).

9.4. VBA system

The VBA system is a subset of VB and includes the VB application builder, its data structures, and control structures that enable you to create custom data types. Like VB, VBA is an event-driven visual programming system. It has the ability to create forms with a standard set of controls and write procedures that handle events that occur during certain actions of the system and the end user. It also allows you to use ActiveX controls and automation. The VBA system is a complete programming system, but it does not have the full range of features that the latest version of VB has.

Programming in the VBA environment has a number of features. In particular, a project cannot be created in VBA independently of the applications that host it.

Because VBA is a visual system, the programmer can create the visible part of the application, which forms the basis of the program-user interface. Through this interface, the user interacts with the program. The program interface is developed on the basis of the principles of the object-oriented approach, which is implemented in VBA in relation to applications running under Windows.

A characteristic of these applications is that many objects are present on the screen at any time (windows, buttons, menus, text and dialog boxes, scroll bars). Within the limits of the program's algorithm, the user has a certain freedom of choice in using these objects: he can click a button, move an object, enter data into a window, etc. When creating a program, the programmer should not limit the user's actions; he must develop a program that responds correctly to any user action, even an incorrect one.

For any object, a number of possible events are defined. Some events are triggered by user actions, such as a single or double mouse click, dragging an object, pressing a keyboard key, and so on. Some events occur as a result of other events: a window opens or closes, a control becomes active or becomes inactive.

Any of the events manifests itself in certain actions of the program, and the types of possible actions can be divided into two groups. The actions of the first group are the result of object properties that are set from some standard list of properties that are set by the VBA programming system and the Windows system itself, for example, minimizing a window after clicking the Minimize button. The second group of actions on events can be defined only by the programmer. For any possible event, the response is provided by creating a VBA procedure. Theoretically, it is possible to create a procedure for each event, but in practice, the programmer fills in the procedure code only for the events of interest in the given program.

VBA objects are functional, that is, they act in certain ways and are able to respond to specific situations. An object's properties affect its appearance and behavior, while its methods determine the functions the object is capable of performing.

Member properties are properties that define nested objects.

Objects are capable of responding to events, both user-initiated and system-generated. User-initiated events appear, for example, when a key is pressed or a mouse button is clicked; thus, any user action can lead to a whole set of events. System-generated events appear automatically in the cases provided for by the computer's software.
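The event mechanism described above can be sketched as follows (a minimal illustration in Python rather than VBA; the class, event, and variable names are invented for the example): each object keeps a table of event procedures, and the "system" fires events that run whatever handlers the programmer has filled in.

```python
class Button:
    """Toy control: stores event-handling procedures by event name."""
    def __init__(self, caption):
        self.caption = caption
        self.handlers = {}  # event name -> procedure

    def on(self, event, procedure):
        """Attach a procedure that responds to the named event."""
        self.handlers[event] = procedure

    def fire(self, event):
        # Events without a handler are simply ignored, just as a programmer
        # fills in procedure code only for the events of interest.
        handler = self.handlers.get(event)
        return handler(self) if handler else None

log = []
ok_button = Button("OK")
ok_button.on("Click", lambda btn: log.append(f"{btn.caption} clicked"))

ok_button.fire("Click")     # user-initiated event: the handler runs
ok_button.fire("DblClick")  # no handler filled in: nothing happens
print(log)                  # -> ['OK clicked']
```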

9.5. VBA programming language

The VBA programming language is designed for writing program code. It has its own alphabet, which includes:

▪ lowercase and uppercase letters of the Latin alphabet (A-Z, a-z);

▪ lowercase and uppercase letters of the Cyrillic alphabet (А-Я, а-я);

▪ non-displayable characters used to separate lexemes (lexical units) from each other;

▪ special characters involved in constructing language constructs: +-*?^=><[]():{}' &©;

▪ numbers from 0 to 9;

▪ underscore "_";

▪ compound symbols perceived as a single character: <=, >=, <>.

A token is a unit of program text that has a specific meaning to the compiler and cannot be further broken down.

VBA program code is a sequence of tokens written in accordance with accepted syntactic rules that implements the desired semantic construction.

An identifier is a sequence of letters, numbers, and underscores.

The VBA system defines some restrictions that are placed on names:

1) the name should start with a letter;

2) the name must not include dots, spaces, separating characters, operation signs, special characters;

3) the name must be unique and not the same as VBA reserved words or other names;

4) name length should not exceed 255 characters;

5) when composing names, it is necessary to follow style conventions;

6) the identifier must clearly reflect the purpose of the variable for understanding the program;

7) it is better to use lowercase letters in names; if the names include several names, they must be separated from each other by underlining or a new word must be started with a capital letter;

8) names of constants should be composed of capital letters;

9) the name of an identifier must begin with a special character indicating the type of data associated with this identifier.
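Several of these restrictions can be checked mechanically. The sketch below (in Python rather than VBA, with a deliberately incomplete, illustrative list of reserved words) applies rules 1-4:

```python
import re

# Illustrative subset only; the real list of VBA reserved words is much longer.
KEYWORDS = {"Sub", "End", "Dim", "If"}

def is_valid_name(name: str) -> bool:
    """Check the core naming rules: starts with a letter, contains only
    letters, digits and underscores, is not reserved, and is at most 255
    characters long (a sketch, not a full validator)."""
    return (
        len(name) <= 255
        and re.fullmatch(r"[A-Za-z][A-Za-z0-9_]*", name) is not None
        and name not in KEYWORDS
    )

print(is_valid_name("totalSum"))   # -> True
print(is_valid_name("2nd_value"))  # -> False (must start with a letter)
print(is_valid_name("total sum"))  # -> False (no spaces allowed)
```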

Variables are objects that are designed to store data. Before using variables in a program, they must be declared (declared). The correct choice of variable type ensures efficient use of computer memory.

String variables can be of variable or fixed length.

Objects whose values do not change and cannot be changed during program execution are called constants. They are divided into named and unnamed.

Enums are used to declare a group of constants under a common name, and they can only be declared in the global declaration section of a module or form.

Variables are divided into two types - simple and structural variables. Arrays are one-dimensional and multidimensional.

After the declaration, the value of the variable can be arbitrary. An assignment operator is used to assign a value to a variable.

Mathematical operations are used to write a formula, which is a program statement that contains numbers, variables, operators, and keywords.

Relational operations can result in a value, and there are only two resulting values: true and false.

Logical operations are used in logical expressions, this happens when there are several selection conditions in relational operations.

String operations are concatenation operations that combine the values of two or more string variables or string constants. The result of such an operation is a longer string composed of the original strings.
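The kinds of operations listed above (assignment, relational, logical, and string operations) can be illustrated as follows (in Python rather than VBA; the variable names are invented for the example):

```python
price = 8       # assignment gives a variable its value
quantity = 0

in_stock = quantity > 0     # relational operation: result is only True or False
affordable = price <= 10

# logical operation: combines several selection conditions
can_buy = in_stock and affordable

# string operation: concatenation builds a longer string from the originals
first, last = "Ada", "Lovelace"
full_name = first + " " + last

print(in_stock, affordable, can_buy)  # -> False True False
print(full_name)                      # -> Ada Lovelace
```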

Topic 10. Fundamentals of information security

10.1. Information protection as a regularity in the development of computer systems

Information protection is the use of various means and methods, the adoption of measures, and the implementation of activities in order to ensure the reliability of transmitted, stored, and processed information.

Information security includes:

▪ ensuring the physical integrity of information, eliminating distortion or destruction of information elements;

▪ preventing the substitution of information elements while maintaining its integrity;

▪ denying unauthorized access to information to persons or processes that do not have the appropriate authority to do so;

▪ gaining confidence that the information resources transferred by the owner will be used only in accordance with the terms agreed upon by the parties.

The processes of violating the reliability of information are divided into accidental and malicious (intentional). The sources of random destructive processes are unintentional, erroneous actions of people, technical failures. Malicious violations appear as a result of deliberate actions of people.

The problem of information security in electronic data processing systems arose almost simultaneously with their creation. It was caused by specific facts of malicious actions with information.

The importance of the problem of providing information reliability is confirmed by the cost of protective measures. Significant material and financial costs are required to provide a reliable protection system. Before building a protection system, an optimization model should be developed that allows achieving the maximum result with a given or minimum expenditure of resources. The calculation of the costs that are necessary to provide the required level of information security should begin with the clarification of several facts: a complete list of threats to information, the potential danger to information of each of the threats, the amount of costs required to neutralize each of the threats.

If in the first decades of active use of a PC, the main danger was posed by hackers who connected to computers mainly through the telephone network, then in the last decade, the violation of information reliability has been progressing through programs, computer viruses, and the global Internet.

There are many ways of unauthorized access to information, including:

▪ viewing;

▪ copying and substitution of data;

▪ input of false programs and messages as a result of connecting to communication channels;

▪ reading the remaining information on its media;

▪ reception of electromagnetic radiation and wave signals;

▪ use of special programs.

To combat all these methods of unauthorized access, it is necessary to develop, create and implement a multi-stage continuous and managed information security architecture. It is not only confidential information that should be protected. The object of protection is usually affected by a certain combination of destabilizing factors. At the same time, the type and level of influence of some factors may not depend on the type and level of others.

A situation is possible when the type and level of interaction of existing factors significantly depend on the influence of others, which explicitly or implicitly enhance such impacts. In this case, it is necessary to apply both means that are independent from the point of view of the effectiveness of protection, and interdependent. In order to provide a sufficiently high level of data security, a compromise must be found between the cost of protective measures, the inconvenience of using protective measures, and the importance of the information being protected. Based on a detailed analysis of numerous interacting factors, a reasonable and effective decision can be made about the balance of protection measures against specific sources of danger.

10.2. Objects and elements of protection in computer data processing systems

A protected object is a system component that contains protected information. A security element is a set of data that may contain information necessary for protection.

During the operation of computer systems, the following may occur:

▪ equipment failures and malfunctions;

▪ system and system technical errors;

▪ software errors;

▪ human errors when working with a computer.

Unauthorized access to information is possible during computer maintenance, while information on machine and other media is being read. Illegal familiarization with information is divided into passive and active. In passive familiarization, information resources are not violated, and the offender can only disclose the content of messages. In active unauthorized access, it is possible to selectively modify or destroy messages, change their order, redirect or delay them, and create fake messages.

To ensure security, various activities are carried out, which are united by the concept of "information security system".

An information security system is a set of organizational (administrative) and technological measures, software and hardware, legal, moral and ethical standards that are used to prevent the threat of violators in order to minimize possible damage to users and owners of the system.

Organizational and administrative means of protection are the regulation of access to information and computing resources, as well as to the functional processes of data processing systems. These protections are used to hinder or eliminate the possibility of realizing security threats. The most typical organizational and administrative means are:

▪ admission to the processing and transmission of protected information only to verified officials;

▪ storage of information media that represent a certain secret, as well as registration logs in safes inaccessible to unauthorized persons;

▪ accounting for the use and destruction of documents (media) with protected information;

▪ dividing access to information and computing resources for officials in accordance with their functional responsibilities.

Technical means of protection are used to create a physically closed environment around the object and the protection elements. They rely on measures such as:

▪ limitation of electromagnetic radiation through shielding of rooms in which information processing is carried out;

▪ implementation of power supply to equipment that processes valuable information from an autonomous power source or a general electrical network through special network filters.

Software tools and methods are used more actively than any other means to protect information in PCs and computer networks. They implement such protection functions as differentiation and control of access to resources; registration and analysis of ongoing processes; prevention of possible destructive impacts on resources; and cryptographic protection of information.

Technological means of information protection are understood as a number of activities that are organically built into the technological processes of data conversion. They also include:

▪ creating archival copies of media;

▪ manual or automatic saving of processed files in external computer memory;

▪ automatic registration of user access to various resources;

▪ development of special instructions for performing all technological procedures, etc.

Legal and moral and ethical measures and means of protection include the laws in force in the country, regulations governing the rules, norms of behavior, the observance of which contributes to the protection of information.

10.3. Means of identification and differentiation of access to information

Identification is the assignment of a unique name or image to an object or subject. Authentication is the establishment of the identity of an object or subject, i.e. checking whether the object (subject) is who it claims to be.

The ultimate goal of the procedures for identifying and authenticating an object (subject) is to admit it to information of limited use in the event of a positive check or deny admission in case of a negative result of the check.

The objects of identification and authentication include: people (users, operators); technical means (monitors, workstations, subscriber points); documents (manual, printouts); magnetic storage media; information on the monitor screen.

The most common authentication method is to assign a password to a person (or another identifier) and store its value in the computer system. A password is a set of characters that defines an object (subject).

The password as a security tool can be used to identify and authenticate the terminal from which the user logs in, as well as to authenticate the computer back to the user.

Given the importance of a password as a means of increasing the security of information from unauthorized use, the following precautions must be observed:

1) do not store passwords in the computer system in unencrypted form;

2) do not print or display passwords in clear text on the user's terminal;

3) do not use your name or the names of relatives, as well as personal information (date of birth, home or office phone number, street name) as a password;

4) do not use real words from an encyclopedia or an explanatory dictionary;

5) use long passwords;

6) use a mixture of upper and lower case keyboard characters;

7) use combinations of two simple words connected by special characters (for example, +,=,<);

8) use invented new words (absurd or even nonsensical in content);

9) change the password as often as possible.
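For illustration, the precautions above can be partly checked automatically. The sketch below is a hypothetical example: the length threshold, rule names and forbidden-word mechanism are assumptions chosen for the demonstration, not taken from the text.

```python
import string

def check_password(password, forbidden_words=()):
    """Return a list of violated precautions (rules and threshold are illustrative)."""
    problems = []
    if len(password) < 10:
        problems.append("too short")
    if not (any(c.islower() for c in password) and any(c.isupper() for c in password)):
        problems.append("no mix of upper and lower case")
    if not any(c in string.punctuation for c in password):
        problems.append("no special characters")
    lowered = password.lower()
    for word in forbidden_words:
        if word.lower() in lowered:
            problems.append("contains forbidden word: " + word)
    return problems

# A surname plus a birth year violates several of the rules above:
print(check_password("ivanov1980", forbidden_words=["ivanov"]))
```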

To identify users, systems that are complex in technical implementation can be used, which authenticate a user by analyzing individual biometric parameters: fingerprints, the pattern of lines on the hand, the iris, voice timbre. The most widely used are physical identification methods that use carriers of password codes. Such carriers can be a pass in access control systems; plastic cards with the owner's name, code and signature; plastic cards with a magnetic strip read by a special reader; plastic cards containing an embedded microchip; and optical memory cards.

One of the most intensively developed areas for ensuring information security is the identification and authentication of documents based on the electronic digital signature. When transmitting information via communication channels, facsimile equipment is used, but in this case the recipient receives not the original but only a copy of the document with a copy of the signature, which can be re-copied during transmission to produce a forged document.

An electronic digital signature is a method of encryption using cryptographic transformation and is a password that depends on the sender, recipient and content of the transmitted message. To prevent reuse of the signature, it must be changed from message to message.
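Real digital signatures rely on asymmetric cryptography, but the idea of a short code that depends on the key and on the content of the message can be illustrated with a keyed hash (HMAC) from the Python standard library. This is a simplified sketch of a content-dependent tag, not the signature scheme described above; the key and messages are invented.

```python
import hashlib
import hmac

def sign(message: bytes, key: bytes) -> str:
    """Produce a tag that depends on both the secret key and the message content."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, key: bytes, tag: str) -> bool:
    """Recompute the tag and compare it in constant time."""
    return hmac.compare_digest(sign(message, key), tag)

key = b"shared-secret"                       # hypothetical key shared by the parties
tag = sign(b"pay 100 to Bob", key)
print(verify(b"pay 100 to Bob", key, tag))   # the unmodified message verifies
print(verify(b"pay 900 to Bob", key, tag))   # an altered message is rejected
```

Note how any change to the message content invalidates the tag, which is exactly why the signature must change from message to message.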

10.4. Cryptographic method of information protection

The most effective means of improving security is cryptographic transformation. It is used to improve the security of:

1) data transmitted in computer networks;

2) data stored in remote memory devices;

3) information exchanged between remote objects.

The protection of information by the method of cryptographic transformation consists in bringing it to an implicit form through the transformation of the constituent parts of information (letters, numbers, syllables, words) using special algorithms or hardware and key codes. The key is a mutable part of the cryptographic system, kept secret and determining which of the possible encryption transformations is performed in this case.

For the transformation (encryption), an algorithm or a device implementing that algorithm is used. The algorithms themselves may be known to a wide range of people. The encryption process is controlled by a periodically changing key code, so that the same algorithm or device yields a new representation of the information each time. With a known key, the text can be decrypted relatively quickly, simply and reliably. Without knowing the key, this procedure can become almost impossible even when a computer is used.

The following necessary requirements are imposed on the methods of cryptographic transformation:

1) it must be sufficiently resistant to attempts to reveal the original text using the encrypted one;

2) the size of the key should not make it difficult to remember or transmit;

3) the costs of protective transformations should be made acceptable for a given level of information security;

4) errors in encryption must not cause outright loss of information;

5) the size of the ciphertext must not exceed the size of the original text.

Methods intended for protective transformations are divided into four main groups: permutations, substitutions (replacements), additive methods and combined methods.

The methods of permutation and replacement (substitution) are characterized by short keys, and the reliability of protection is determined by the complexity of the transformation algorithms. In contrast, additive methods are characterized by simple algorithms and long keys. Combined methods are more reliable. They most often combine the advantages of the components used.

The four cryptographic transformation methods mentioned are symmetric encryption methods. The same key is used for both encryption and decryption.

The main methods of cryptographic transformation are the permutation and replacement methods. The basis of the permutation method is to break the source text into blocks, and then write these blocks and read the ciphertext along different paths of a geometric figure.

Replacement encryption means that the characters of the source text (block) written in one alphabet are replaced by characters of another alphabet in accordance with the transformation key used.
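As a hedged illustration of these two basic methods, the sketch below implements a simple alphabet substitution and a columnar permutation in Python. The keys and texts are invented for the example, and real ciphers are far more elaborate than this.

```python
def substitute(text, key_alphabet, plain_alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Replacement: map each character of the plain alphabet to the key alphabet."""
    table = str.maketrans(plain_alphabet, key_alphabet)
    return text.translate(table)

def permute(text, key):
    """Permutation: write the text in rows, then read the columns in key order."""
    ncols = len(key)
    rows = [text[i:i + ncols] for i in range(0, len(text), ncols)]
    return "".join(row[c] for c in key for row in rows if c < len(row))

# A reversed alphabet as the substitution key (Atbash-style, illustrative only):
print(substitute("attack", "zyxwvutsrqponmlkjihgfedcba"))
# Columnar transposition with a 4-column key:
print(permute("attackatdawn", [2, 0, 3, 1]))
```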

The combination of these methods has led to the formation of the derivative cipher method, which has strong cryptographic capabilities. The algorithm of the method is implemented both in hardware and software, but is designed to be implemented using special-purpose electronic devices, which allows achieving high performance and simplified organization of information processing. The industrial production of equipment for cryptographic encryption, established in some Western countries, makes it possible to dramatically increase the level of security of commercial information during its storage and electronic exchange in computer systems.

10.5. Computer viruses

A computer virus is a specially written program that can spontaneously attach itself to other programs (infect them), create copies of itself and inject them into files, system areas of a computer and other computers connected to it, in order to disrupt the normal operation of programs, damage files and directories, and create various kinds of interference when working on a computer.

The appearance of viruses in a computer is determined by the following observable signs:

▪ a decrease in computer performance;

▪ failure or slowdown of OS loading;

▪ an increase in the number of files on the disk;

▪ changes in file sizes;

▪ the periodic appearance of inappropriate messages on the monitor screen;

▪ a decrease in the amount of free RAM;

▪ a sharp increase in hard disk access time;

▪ destruction of the file structure;

▪ the disk drive warning light coming on when the drive is not being accessed.

Removable disks (floppy disks and CD-ROMs) and computer networks are usually the main ways to infect computers with viruses. Infection of the computer's hard disk can occur if the computer is booted from a floppy disk containing a virus.

Based on their habitat, viruses are classified into boot, file, system, network and file-boot (multifunctional) viruses.

Boot viruses infect the boot sector of a disk or the sector that contains the boot program of the system disk.

File viruses are located mainly in .COM and .EXE executable files.

System viruses infect system modules and peripheral device drivers, file allocation tables, and partition tables.

Network viruses reside in computer networks, while file-boot viruses infect disk boot sectors and application program files.

By the way they infect their habitat, viruses are divided into resident and non-resident.

Resident viruses, when infecting a computer, leave their resident part in the operating system, which, after infection, intercepts the OS's calls to other objects of infection, infiltrates them and performs its destructive actions, which can lead to shutdown or reboot of the computer. Non-resident viruses do not infect the computer operating system and are active for a limited time.

The peculiarity of the construction of viruses affects their manifestation and functioning.

A logic bomb is a program that is built into a large software package. It is harmless until a certain event occurs, after which its logical mechanism is implemented.

Mutant programs are self-reproducing programs that create copies clearly different from the original.

Invisible viruses, or stealth viruses, intercept OS calls to affected files and disk sectors and substitute uninfected objects instead. When accessing files, these viruses use rather original algorithms that allow them to "deceive" resident anti-virus monitors.

Macro viruses use the capabilities of macro languages that are built into office data processing programs (text editors, spreadsheets).

By the degree of impact on the resources of computer systems and networks, or by destructive capabilities, harmless, non-dangerous, dangerous and destructive viruses are distinguished.

Harmless viruses do not have a pathological effect on the computer. Non-dangerous viruses do not destroy files, but they reduce free disk space and display graphical effects. Dangerous viruses often cause significant disruption to the computer. Destructive viruses can lead to the erasure of information and to complete or partial disruption of application programs. It is important to keep in mind that any file capable of loading and executing program code is a potential place for a virus.

10.6. Antivirus programs

The widespread use of computer viruses has led to the development of anti-virus programs that allow you to detect and destroy viruses and "treat" affected resources.

The basis of most anti-virus programs is the principle of searching for virus signatures. A virus signature is some unique characteristic of a virus program that indicates the presence of a virus in a computer system. Most often, anti-virus programs include a periodically updated database of virus signatures. An antivirus program examines and analyzes a computer system and makes comparisons to match signatures in a database. If the program finds a match, it tries to clean up the detected virus.
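The signature-search principle can be sketched as follows. The signature database here is a toy dictionary (a shortened fragment of the EICAR test string and a made-up byte pattern), not a real anti-virus database, which would hold thousands of entries.

```python
# Toy signature database: a shortened EICAR test fragment and an invented
# byte pattern; real anti-virus databases are periodically updated.
KNOWN_SIGNATURES = {
    "EICAR-Test": b"X5O!P%@AP",
    "Demo-Virus": b"\xde\xad\xbe\xef",
}

def scan(data: bytes):
    """Return the names of all known signatures found in the byte stream."""
    return [name for name, sig in KNOWN_SIGNATURES.items() if sig in data]

print(scan(b"harmless text"))                     # no matches
print(scan(b"prefix \xde\xad\xbe\xef suffix"))    # the demo signature matches
```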

According to the way they work, anti-virus programs can be divided into filters, auditors, doctors, detectors, vaccines, etc.

Filter programs are "watchmen" that are constantly resident in RAM. They intercept all requests to the OS to perform suspicious actions, i.e. operations that viruses use to reproduce and to damage information and software resources, including reformatting the hard drive. Among these are attempts to change file attributes, modify executable COM or EXE files, or write to disk boot sectors.

Each time such an action is requested, a message appears on the screen stating what action is requested and which program will perform it; the user must either allow or deny its execution. The constant presence of "watchdog" programs in RAM significantly reduces its available volume, which is the main disadvantage of these programs. In addition, filter programs are not able to "treat" files or disks. This function is performed by other anti-virus programs, such as AVP, Norton Antivirus for Windows, Thunder Byte Professional, and McAfee Virus Scan.

Auditor programs are a reliable means of protection against viruses. They record the initial state of programs, directories and system areas of the disk, provided that the computer has not yet been infected with a virus. Subsequently, the program periodically compares the current state with the original. If inconsistencies are found (in file length, modification date, or the file's cyclic redundancy check code), a message appears on the computer screen. Among auditor programs, one can single out Adinf and its companion Adinf Cure Module.
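The core idea of an auditor program, remembering a baseline and comparing later states against it, can be sketched like this. It is a minimal illustration (size, modification time and a hash instead of a CRC), not how Adinf itself works.

```python
import hashlib
import os
import tempfile

def snapshot(paths):
    """Record size, modification time and a content hash for each file."""
    state = {}
    for path in paths:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        st = os.stat(path)
        state[path] = (st.st_size, st.st_mtime, digest)
    return state

def audit(paths, saved_state):
    """Report every file whose current state differs from the saved baseline."""
    return [p for p in paths if snapshot([p])[p] != saved_state.get(p)]

# Demonstration on a temporary file standing in for a protected file:
with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write("original contents")
    name = f.name
baseline = snapshot([name])
with open(name, "a") as f:
    f.write(" tampered")
print(audit([name], baseline))   # the modified file is reported
```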

A doctor program is capable not only of detecting but also of "curing" infected programs or disks: it removes the virus body from infected programs. Programs of this type are divided into phages and polyphages. Phages are programs used to search for viruses of a certain type; polyphages are designed to detect and destroy a wide variety of viruses. In Russia, the most commonly used polyphages are MS Antivirus, Aidstest and Doctor Web. They are continuously updated to combat newly emerging viruses.

Detector programs are able to detect files infected with one or more viruses known to the program developers.

Vaccine programs, or immunizers, belong to the class of resident programs. They modify programs and disks in a way that does not affect their operation, but the virus against which the vaccination is performed considers them already infected and does not infect them. At the moment, many anti-virus programs have been developed that have received wide recognition and are constantly updated with new tools to combat viruses.

The Doctor Web polyphage program is used to combat polymorphic viruses that have appeared relatively recently. In heuristic analysis mode, this program effectively detects files infected with new, unknown viruses. Using Doctor Web to control floppy disks and files received over the network, you can almost certainly avoid system infection.

When using the Windows NT operating system, there are problems with protection against viruses designed specifically for this environment. A new type of infection has also appeared: macro viruses that are "implanted" in documents prepared by the Word word processor and in Excel spreadsheets. The most common anti-virus programs include AntiViral Toolkit Pro (AVP32), Norton Antivirus for Windows, Thunder Byte Professional, and McAfee Virus Scan. These programs operate as scanners and carry out anti-virus control of RAM, folders and disks. In addition, they contain algorithms for recognizing new types of viruses and allow files and disks to be disinfected during scanning.

AntiViral Toolkit Pro (AVP32) is a 32-bit application running on Windows NT. It has a convenient user interface, a help system, a flexible system of user-selectable settings, and recognizes a large number of different viruses. This program detects and removes polymorphic, mutant and stealth viruses, as well as macro viruses that infect Word documents, Excel spreadsheets and Access objects, and "Trojan horses".

An important feature of this program is the ability to control all file operations in the background and to detect viruses before the system is actually infected, as well as to detect viruses inside ZIP, ARJ, LHA and RAR archives.

The interface of AllMicro Antivirus is simple and requires no additional knowledge of the product from the user. When working with this program, press the Start (Scan) button, after which it checks or scans RAM, the boot and system sectors of the hard disk, and then all files, including archived and packed ones.

Vscan 95 scans the computer's memory, the boot sectors of the system drive, and all files in the root directory at boot time. The other two programs in the package (McAfee Vshield and Vscan) are Windows applications. The first is used, after Windows loads, to monitor newly connected drives and to control executable programs and copied files; the second additionally checks memory, drives and files. McAfee VirusScan can find macro viruses in MS Word files.

In the course of the development of local computer networks, e-mail and the Internet, and with the introduction of the Windows NT network operating system, anti-virus software developers have prepared and put on the market such programs as Mail Checker, which checks incoming and outgoing e-mail, and AntiViral Toolkit Pro for Novell NetWare (AVPN), used to detect, disinfect, delete and move infected files to a special directory. The AVPN program is used as an anti-virus scanner and a filter that constantly monitors the files stored on the server. It can remove, move and "cure" affected objects; check packed and archived files; identify unknown viruses using a heuristic mechanism; scan remote servers in scanner mode; and disconnect an infected station from the network. AVPN is easily configured to scan files of various types and has a convenient scheme for updating the anti-virus database.

10.7. Software protection

Software products are important objects of protection for a number of reasons:

1) they are the product of the intellectual labor of highly qualified specialists, often groups of tens or even hundreds of people;

2) the design of these products is associated with the consumption of significant material and labor resources and is based on the use of expensive computer equipment and high technologies;

3) significant labor costs are required to restore broken software, and the use of simple computing equipment is fraught with negative results for organizations or individuals.

Protection of software products has the following goals:

▪ restriction of unauthorized access of certain categories of users to work with them;

▪ exclusion of deliberate damage to programs in order to disrupt the normal course of data processing;

▪ preventing intentional modification of the program for the purpose of damaging the reputation of the software manufacturer;

▪ preventing unauthorized replication (copying) of programs;

▪ exclusion of unauthorized study of the content, structure and mechanism of the program.

Software products should be protected from unauthorized influence by various entities: people, technical means, specialized programs, and the environment. Influence on a software product is possible through theft or physical destruction of its documentation or of the machine medium itself, as well as by disrupting the software's functionality.

Technical means (hardware), when connected to a computer or to the transmission medium, can read and decrypt programs, and can also physically destroy them.

With the help of specialized programs, a software product can be infected with viruses, copied without authorization, or its content studied without authorization.

The environment due to anomalous phenomena (increased electromagnetic radiation, fire, floods) can cause physical destruction of the software product.

The easiest and most affordable way to protect software products is to restrict access to them using:

▪ password protection of programs when they are launched;

▪ key floppy disk;

▪ a special technical device (electronic key) connected to the computer input/output port.

In order to avoid unauthorized copying of programs, special protection software should:

▪ identify the environment from which the program is launched;

▪ keep records of the number of authorized installations or copies performed;

▪ counteract (even to the point of self-destruction) the study of algorithms and programs of the system.

For software products, effective safeguards are:

1) identification of the environment from which the program is launched;

2) entering a record of the number of authorized installations or copies made;

3) counteraction to non-standard formatting of the startup floppy disk;

4) fixing the location of the program on the hard disk;

5) binding to an electronic key inserted into the input-output port;

6) binding to the BIOS number.

When protecting software products, it is necessary to use legal methods. Among them are licensing agreements and contracts, patent protection, copyright, technological and industrial secrecy.

10.8. Securing data on an offline computer

The most common cases that pose a threat to data are accidental data erasure, software failure and hardware failure. One of the first recommendations to the user is to back up the data.

For magnetic disks there is a parameter such as the mean time between failures. Although it may be expressed in years, failures still occur, so backups are needed.

When working on a computer, data is sometimes not read due to the failure of the hard disk control board. By replacing the controller board and restarting the computer, you can resume the interrupted job.

In order to ensure the safety of data, it is necessary to create backup copies. The use of copying as one of the data security methods requires the choice of software product, procedure (full, partial or selective backup) and frequency of backup. Depending on the significance of the information, a double backup is sometimes made. Do not neglect the testing of backups. Data must also be protected when the computer is on a small network, when users use file server shares.
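A full backup of a directory into a timestamped archive can be sketched with the standard library. The function below is a minimal illustration of the "full backup" procedure mentioned above; the paths in the commented call are hypothetical.

```python
import shutil
import time
from pathlib import Path

def backup(source_dir, backup_root):
    """Create a full, timestamped zip archive of source_dir (a minimal sketch)."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    target = Path(backup_root) / ("backup-" + stamp)
    # make_archive returns the path of the archive it created
    return shutil.make_archive(str(target), "zip", source_dir)

# backup("C:/work/documents", "D:/backups")  # hypothetical paths
```

Partial or selective backup, double backup and scheduled runs would be built on top of the same idea; testing the resulting archives, as the text advises, should not be neglected.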

Security methods include:

▪ use of attributes of files and directories such as “hidden”, “read-only”;

▪ storing important data on floppy disks;

▪ placement of data in password-protected archive files;

▪ inclusion of regular scanning for computer viruses in the security program.
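The "read-only" attribute mentioned in the first item can be set programmatically. The sketch below uses POSIX-style permission bits as an approximate analogue of the DOS/Windows file attribute; the helper names are invented for the example.

```python
import os
import stat
import tempfile

def make_read_only(path):
    """Clear all write-permission bits, analogous to a 'read-only' attribute."""
    mode = os.stat(path).st_mode
    os.chmod(path, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))

def is_read_only(path):
    """True if the owner's write bit is cleared."""
    return not (os.stat(path).st_mode & stat.S_IWUSR)

# Demonstration on a temporary file:
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
make_read_only(tmp.name)
print(is_read_only(tmp.name))   # the file is now protected from modification
```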

There are three main ways to use antivirus programs:

1) search for a virus at boot, when the command to launch an anti-virus program is included in AUTOEXEC.bat;

2) launching an anti-virus program manually;

3) visual inspection of each downloaded file.

A pragmatic method of securing information on an offline computer is password protection. After the computer is turned on and the CMOS setup program is run, the user can enter a password twice. Protection at the CMOS level then locks the entire computer if the correct password is not entered.

In case you don't want to use a password at boot, some keyboard models can be locked using the physical keys that came with your computer.

The ability to protect individual files is provided when the user works with office packages (word processors, spreadsheets, DBMS) and executes the command to save files (Save As...). If you then click the Options button, in the dialog box that opens you can set a password restricting work with the document. To restore the original form of data protected in this way, the same password must be entered. A user can forget the password, or write it down on paper and simply lose it, in which case even more trouble can arise than when working without password protection.

There are quite a variety of ways to protect computers that work stand-alone or as part of a small network, at home or in the office. When choosing a strategy for protecting information on a computer, it is necessary to find a compromise between the value of the protected data, the costs of providing protection, and the inconvenience that the protection system imposes on working with data.

10.9. Data Security in an Online Environment

Interactive environments are vulnerable in terms of data security. An example of interactive media is any of the systems with communication capabilities, such as email, computer networks, the Internet.

E-mail is any form of communication used by computers and modems. The most vulnerable places in e-mail are the sender's outbox and the recipient's mailbox. Each e-mail software package allows incoming and outgoing messages to be archived and forwarded to any other address, which can be abused by intruders.

E-mail, while providing message forwarding, can cause significant harm to the recipient of messages. Additional safety measures should be used to prevent undesirable consequences, including:

▪ You cannot immediately launch programs received by email, especially attachments. You need to save the file on disk, scan it with an antivirus program and only then run it;

▪ It is prohibited to disclose your password and personal data, even if the sender offers the recipient something very tempting;

▪ when opening received MS Office files (in Word, Excel), you should, if possible, not use macros;

▪ It is important to try to use proven as well as newer versions of email programs.

One of the important problems for Internet users is the problem of data security in the network itself. The user is connected to resources through a provider. To protect information from hooligan elements, unskilled users and criminals, the Internet uses a system of rights, or access control. Each data file (or other computer resource) has a set of attributes stating, for example, that the file can be viewed by anyone but changed only by its owner. Attributes can also specify that no one but the owner may view a file, even though the names of such information resources remain visible. Usually the user seeks to protect their information in some way, but it must be remembered that system administrators can overcome the protection systems. In this case, various methods of encrypting information with user-developed keys come to the rescue.

One of the problems of working on the Internet is restricting the access of certain categories of users (children and schoolchildren) to information resources. This can be done with the help of special software products - firewalls (Net Nanny, Surf-Watch, Cyber Patrol). They are based on filtering by keywords and on fixed lists of WWW locations containing material inappropriate for children. Similar programs that record Internet sessions and deny access to certain places on the network can be installed in offices and other establishments to prevent employees from wasting time on personal interests.

The Internet is a system in which numerous users have their own Web servers containing advertising or reference information on Web pages. Competitors may corrupt the content of these pages. To avoid trouble in such situations, review your Web pages regularly. If information is corrupted, it must be restored from pre-prepared copies of the files. It is important to keep in mind that providers are obliged to ensure the security of information on their servers: they systematically review event logs and update software when security problems are found in it.

Topic 11. Databases

11.1. The concept of a database. Database management systems

The word "data" is defined as a dialectical component of information in the form of registered signals. Data registration can be carried out by any physical method (mechanical movement of physical bodies, change in their shape or surface quality parameters, change in electrical, magnetic, optical characteristics, chemical composition or nature of chemical bonds, change in the state of the electronic system, etc.). Initially, the following data types were used when creating databases:

1) numeric (for example, 17; 0.27; 2E-7);

2) character or alphanumeric (in particular, "ceiling", "table");

3) dates, specified either using the special "Date" type or as ordinary character data (for example, 12.02.2005, 2005/02/12).

Other data types were later defined, including:

1) temporal and date-time, used to store information about the time and/or date (for example, 5.02.2005, 7:27:04, 23.02.2005 16:00);

2) character data of variable length, designed to store textual information of great length;

3) binary, which are used to store graphic objects, audio and video information, spatial, chronological and other special information;

4) hyperlinks that allow you to store links to various resources located outside the database.
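The data types listed above map naturally onto column types in a table. A minimal sketch using the SQLite engine from the Python standard library; the table and column names are invented for illustration.

```python
import sqlite3

# In-memory database; the column types mirror the categories listed above.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE document (
        id      INTEGER PRIMARY KEY,  -- numeric
        title   TEXT,                 -- character (alphanumeric) data
        created TEXT,                 -- date-time stored as ISO-formatted text
        scan    BLOB,                 -- binary data (graphics, audio, video)
        source  TEXT                  -- hyperlink stored as a URL string
    )
""")
con.execute("INSERT INTO document VALUES (?, ?, ?, ?, ?)",
            (1, "Report", "2005-02-12 16:00:00", b"\x89PNG", "http://example.org"))
row = con.execute("SELECT title, created FROM document").fetchone()
print(row)
```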

A database is a set of interrelated data stored in computer memory in a way that reflects the structure of the objects and their relationships in the subject area under study. It is the main form of organizing data storage in information systems.

A database management system (DBMS) is a set of language and software tools designed to create, maintain and organize shared access to databases for multiple users.

The first DBMSs were IMS, developed by IBM (1968), and ADABAS from Software AG (1969). At present there are a great many different database management systems (several thousand), and their number is constantly growing.

Among the main functions of the DBMS (higher-level functions), one can single out the storage, modification and processing of information, as well as the development and receipt of various output documents.

The functions of the DBMS of a lower level include:

1) data management in external memory;

2) RAM buffer management;

3) transaction management;

4) keeping a log of changes in the database;

5) ensuring the integrity and security of databases.

11.2. Hierarchical, network and relational data representation models

The information in a database is structured in some way, that is, it can be described by a data representation model (data model) supported by the DBMS. These models are divided into hierarchical, network, and relational.

When using a hierarchical data representation model, relationships between data can be characterized using an ordered graph (or tree). In programming, when describing the structure of a hierarchical database, the "tree" data type is used.

The main advantages of the hierarchical data model are:

1) efficient use of computer memory;

2) high speed of performing basic operations on data;

3) convenience of working with hierarchically ordered information.

The disadvantages of a hierarchical data representation model include:

1) the cumbersomeness of such a model for processing information with fairly complex logical connections;

2) the difficulty of understanding its operation for an ordinary user.

A small number of DBMSs are built on a hierarchical data model.
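The tree structure underlying the hierarchical model can be sketched with nested Python dictionaries, where each record has exactly one parent (the names and structure below are invented for illustration and are not tied to any real hierarchical DBMS such as IMS):

```python
# A hierarchical database fragment as a tree: each record has one parent.
faculty = {
    "name": "Faculty of Informatics",
    "children": [
        {"name": "Department of Software", "children": [
            {"name": "Group 101", "children": []},
            {"name": "Group 102", "children": []},
        ]},
        {"name": "Department of Hardware", "children": []},
    ],
}

def find(node, name):
    """Depth-first search down the hierarchy: fast along parent-child
    paths, but awkward for arbitrary cross-links (a noted drawback)."""
    if node["name"] == name:
        return node
    for child in node["children"]:
        hit = find(child, name)
        if hit is not None:
            return hit
    return None

print(find(faculty, "Group 102")["name"])  # Group 102
```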

The network model can be represented as a development and generalization of a hierarchical data model that allows displaying various data relationships in the form of an arbitrary graph.

The advantages of the network data presentation model are:

1) efficiency in the use of computer memory;

2) high speed of performing basic operations on data;

3) greater possibilities (compared with the hierarchical model) for forming arbitrary links.

The disadvantages of the network data presentation model include:

1) high complexity and rigidity of the database schema, which is built on its basis;

2) the difficulty for a non-professional user of understanding and processing information in such a database.

Database management systems built on the basis of the network model are also not widely used in practice.

The relational model of data representation was developed by the IBM employee E. F. Codd. The model is based on the concept of a relation; the simplest representation of a relation is a two-dimensional table.

The advantages of the relational data representation model (compared to the hierarchical and network models) are its clarity, simplicity and convenience in the practical implementation of relational databases on a computer.

The disadvantages of the relational data representation model include:

1) lack of standard means of identifying individual records;

2) the complexity of describing hierarchical and network relationships.

Most of the DBMS used by both professional and non-professional users are built on the basis of a relational data model (Visual FoxPro and Access from Microsoft, Oracle from Oracle, etc.).
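In the relational model, relationships between tables are expressed through matching column values rather than through pointers, as a join query illustrates. A minimal sketch using SQLite from Python (the table contents and names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE departments (dept_id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE employees (emp_id INTEGER PRIMARY KEY, name TEXT,
                            dept_id INTEGER REFERENCES departments);
    INSERT INTO departments VALUES (1, 'Sales'), (2, 'Research');
    INSERT INTO employees VALUES (10, 'Ivanov', 1), (11, 'Petrov', 2);
""")

# The relationship between the two relations is expressed by a join
# on the shared dept_id column, not by navigational pointers.
rows = conn.execute("""
    SELECT e.name, d.title
    FROM employees AS e JOIN departments AS d USING (dept_id)
    ORDER BY e.name
""").fetchall()
print(rows)  # [('Ivanov', 'Sales'), ('Petrov', 'Research')]
```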

11.3. Post-relational, multidimensional and object-oriented data representation models

The post-relational data representation model is an extended version of the relational model that removes the restriction that data stored in table records must be indivisible. For this reason, data storage in the post-relational model is considered more efficient than in the relational one.

The advantage of the post-relational model is that it makes it possible to form a set of related relational tables through one post-relational table, which ensures high visibility of information presentation and efficiency of its processing.

The disadvantage of this model lies in the complexity of solving the problem of ensuring the integrity and consistency of the stored data.

Examples of post-relational DBMS are uniVerse, Bubba, and Dasdb.

In 1993, E. F. Codd published an article in which he formulated 12 basic requirements for systems of the OLAP (On-Line Analytical Processing) class. The principles described there concerned the conceptual representation and processing of multidimensional data, and this became the starting point for the growth of interest in multidimensional data representation models.

Multidimensional models are highly specialized DBMS that are used for interactive analytical processing of information. Multidimensional data organization is more visual and informative in comparison with the relational model.

The main disadvantage of a multidimensional data model is its cumbersomeness for solving the simplest problems of ordinary online information processing.

Examples of DBMS based on such models are Essbase from Arbor Software, Oracle Express Server from Oracle, etc.
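The idea of a multidimensional data "cube" can be sketched in miniature with an ordinary Python dictionary keyed by tuples of dimension values. This is a toy illustration only: the dimensions and figures are invented, and real multidimensional DBMS such as Essbase store and index data very differently.

```python
# Sales "cube" with dimensions (product, region, year); values are amounts.
cube = {
    ("tea",    "north", 2004): 10,
    ("tea",    "south", 2004): 7,
    ("coffee", "north", 2004): 5,
    ("tea",    "north", 2005): 12,
}

def slice_sum(product=None, region=None, year=None):
    """Aggregate along the unspecified dimensions (a 'roll-up')."""
    return sum(v for (p, r, y), v in cube.items()
               if product in (None, p)
               and region in (None, r)
               and year in (None, y))

print(slice_sum(product="tea"))              # 29
print(slice_sum(region="north", year=2004))  # 15
```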

Object-oriented data representation models allow you to identify individual database records. Certain relationships are formed between database records and their processing functions using mechanisms similar to the corresponding facilities in object-oriented programming languages.

The advantages of an object-oriented data model are:

1) the ability to display information about the complex relationships of objects;

2) the ability to identify a single database record and determine the function of its processing.

The disadvantages of the object-oriented data model include:

1) difficulty in understanding its activities by a non-professional user;

2) inconvenience of data processing;

3) low speed of query execution.

Among object-oriented DBMS one can single out POET from POET Software, Versant from Versant Technologies, etc.

11.4. Classifications of database management systems

Any software product capable of supporting the processes of designing, administering and using a database can fall under the definition of a DBMS, so a classification of DBMS by types of programs was developed:

1) full-featured - the most numerous and most powerful programs in terms of capabilities, such as Microsoft Access, Microsoft FoxPro, Clarion Database Developer, etc.;

2) database servers - are used to organize data processing centers in computer networks. Among them are Microsoft SQL Server, NetWare SQL by Novell;

3) database clients - various programs (full-featured DBMS, spreadsheets, word processors, etc.) that access a database server. Network performance is usually higher when the client and server parts of the database come from the same vendor, although this condition is not mandatory;

4) tools for developing programs for working with databases - designed to develop such software products as client programs, database servers and their individual applications, as well as user applications. Programming systems, program libraries for various programming languages, and development automation packages serve as tools for developing custom applications. The most commonly used custom application development tools are Borland's Delphi and Microsoft's Visual Basic.

By type of application, DBMS are divided into personal and multi-user.

Personal DBMS (for example, Visual FoxPro, Paradox, Access) are used in the design of personal databases and low-cost applications that work with them, which, in turn, can be used as a client part of a multi-user DBMS.

Multiuser DBMS (for example, Oracle and Informix) consist of a database server and a client part and are able to work with various types of computers and operating systems of various manufacturers.

Most often, information systems are built on the basis of a client-server architecture, which includes a computer network and a distributed database. The network is used to organize the joint work of users on their PCs. The distributed database consists of a multi-user database located on the server computer and personal databases located on the workstations; the database server performs the bulk of the data processing.

11.5. Database access languages

There are two types of database access languages:

1) data description language - a high-level language designed to describe the logical structure of data;

2) data manipulation language - a set of structures that ensure the implementation of basic operations for working with data: input, modification and selection of data by request.

The most common are two standardized access languages:

1) QBE (Query by Example) - a sample query language characterized by the properties of a data manipulation language;

2) SQL (Structured Query Language) - a structured query language combining the properties of both types of languages.

The QBE language was developed on the basis of relational calculus with domain variables. It helps to form complex queries to the database by filling in the query form offered by the database management system. Any relational DBMS has its own version of the QBE language. The advantages of this method of formulating queries to the database are:

1) high visibility;

2) no need to specify the algorithm for performing the operation.

The Structured Query Language (SQL) is based on relational calculus with tuple variables. Several standards for this language have been developed, the best known of which are SQL-89 and SQL-92. SQL is used to perform operations on tables and on the data contained in those tables, as well as some related operations. It is not used as a standalone language and is most often embedded in the built-in programming language of the DBMS (for example, FoxPro in Visual FoxPro, ObjectPAL in Paradox, Visual Basic for Applications in Access).

The SQL language is oriented solely towards data access and is not a complete program development tool on its own; when it is included in programs written in other languages, it is called embedded SQL. There are two main methods of using embedded SQL:

1) static - characterized by the fact that the program text contains calls to SQL language functions that are rigidly included in the executable module after compilation. Changes in called functions can be made at the level of individual call parameters using programming language variables;

2) dynamic - differs in the dynamic construction of SQL function calls and the interpretation of these calls during program execution. It is most often used in cases where the type of SQL call in the application is not known in advance, and it is built in a dialogue with the user.
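The contrast between the two methods can be sketched with Python's sqlite3 module (Python's DB-API rather than true precompiled embedded SQL, so this is only an analogy; the table and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (title TEXT, year INTEGER)")
conn.execute("INSERT INTO books VALUES ('Informatics', 2002), ('Networks', 2001)")

# "Static" style: the statement text is fixed in the program source;
# only parameter values change at run time (via ? placeholders).
stmt = "SELECT title FROM books WHERE year = ?"
static_rows = conn.execute(stmt, (2002,)).fetchall()
print(static_rows)  # [('Informatics',)]

# "Dynamic" style: the statement itself is assembled at run time,
# e.g. from search conditions chosen in a dialogue with the user.
conditions = ["year >= ?"]  # pretend these came from user input
params = [2001]
dynamic_stmt = "SELECT title FROM books WHERE " + " AND ".join(conditions)
dynamic_rows = conn.execute(dynamic_stmt, params).fetchall()
print(sorted(dynamic_rows))  # [('Informatics',), ('Networks',)]
```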

11.6. Databases on the Internet

The basis for publishing databases on the World Wide Web is simply placing information from databases on Web pages of the network.

The publication of databases on the Internet is designed to solve a number of problems, among which are the following:

1) organizing the interconnection of database management systems that operate on different platforms;

2) building information systems on the Internet based on a multi-level database architecture;

3) building local Intranet networks using technologies for publishing databases on the Internet;

4) application in the Internet of information from available local network databases;

5) use of databases to organize information presented on the Internet;

6) using a Web browser as an accessible client program for accessing databases on the Internet.

To publish databases on Web pages, two main methods are used to generate Web pages containing information from databases:

1) static publication - Web pages are created in advance and stored on the Web server (as files on disk in a Web document format) until a user request for them arrives. This method is used for publishing information that is rarely updated in the database. Its main advantages are faster access to Web documents containing information from databases and a reduced load on the server when processing client requests;

2) dynamic publication - Web pages are created when a user request arrives at the server. The server passes the request to an extension program that generates the required document, and then sends the finished Web pages back to the browser. This method is used when the contents of the database are updated frequently, sometimes in real time; it is how information from databases is published for online stores and information systems. Dynamic pages are formed using various tools and technologies, such as ASP (Active Server Pages) and PHP (originally Personal Home Page tools).

Among the software tools that allow you to get information from the Internet, Web applications (Internet applications) stand out, which are a set of Web pages, scripts and other software tools located on one or more computers and designed to perform an applied task. Applications that publish databases on the Internet are classified as a separate class of Web applications.
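Dynamic publication in miniature: at request time, the page body is generated from the current contents of the database. The toy sketch below does this in Python (a real site would use PHP, ASP, or a Web framework, and would serve the result over HTTP; the table, goods, and function name here are invented):

```python
import sqlite3
from html import escape

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE goods (name TEXT, price REAL);
    INSERT INTO goods VALUES ('pen', 1.5), ('notebook', 3.0);
""")

def render_catalog(conn):
    """Build an HTML fragment from the current database contents,
    the way a server-side script would on each incoming request."""
    rows = conn.execute("SELECT name, price FROM goods ORDER BY name").fetchall()
    items = "\n".join(f"<li>{escape(name)}: {price}</li>" for name, price in rows)
    return f"<ul>\n{items}\n</ul>"

page = render_catalog(conn)
print(page)
```

Because the fragment is rebuilt on every call, an UPDATE to the goods table is reflected in the very next generated page, which is exactly the property that static publication lacks.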


Author: Kozlova I.S.
