Computer science. General theoretical foundations of computer science (lecture notes)


Topic 1. General theoretical foundations of computer science

1.1. The concept of informatics

Informatics (from French information - information + automatique - automation) has a wide range of applications. The main directions of this scientific discipline are:

▪ development of computer systems and software;

▪ information theory, which studies processes based on the transmission, reception, transformation and storage of information;

▪ artificial intelligence methods, which make it possible to create programs for solving problems that require a certain intellectual effort from a person (logical inference, speech understanding, visual perception, etc.);

▪ system analysis, which consists of studying the purpose of the designed system and determining the requirements that it must meet;

▪ methods of animation, computer graphics, multimedia;

▪ telecommunications (global computer networks);

▪ various applications that are used in manufacturing, science, education, medicine, trade, agriculture, etc.

Most often, informatics is considered to consist of two types of means:

1) technical - computer equipment;

2) software - the whole variety of existing computer programs.

Sometimes another main branch is singled out - algorithmic tools.

In the modern world, the role of informatics is enormous. It covers not only the sphere of material production, but also the intellectual, spiritual aspects of life. The increase in the production of computer equipment, the development of information networks, the emergence of new information technologies significantly affect all spheres of society: production, science, education, medicine, culture, etc.

1.2. The concept of information

The word "information" in Latin means information, clarification, presentation.

Information is information about objects and phenomena of the surrounding world, their properties, characteristics and state, perceived by information systems. Information is not a characteristic of the message, but of the relationship between the message and its analyzer. If there is no consumer, at least a potential one, it makes no sense to talk about information.

In computer science, information is understood as a certain sequence of symbolic designations (letters, numbers, images, sounds, etc.) that carry a semantic load and are presented in a form understandable to a computer. Each new character in such a sequence increases the information content of the message.

1.3. Information coding system

Information coding is used to unify the form of presentation of data that belongs to different types, in order to automate work with information.

Encoding is the expression of data of one type through data of another type. For example, natural human languages ​​can be considered as systems for encoding concepts for expressing thoughts through speech, and alphabets are also systems for encoding language components using graphic symbols.

In computer technology, binary coding is used. The basis of this coding system is the representation of data through a sequence of two characters: 0 and 1. These characters are called binary digits, or bits for short. One bit can encode two concepts: 0 or 1 (yes or no, true or false, etc.). With two bits it is possible to express four different concepts, and with three bits, eight different values.
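The doubling described above is easy to check; a minimal sketch in Python (the language is our choice for illustration, not part of the original notes):

```python
# Each additional bit doubles the number of concepts that can be encoded:
# n bits distinguish 2 ** n different values.
for n in (1, 2, 3, 8, 16):
    print(f"{n} bits -> {2 ** n} values")
```

With 8 bits this gives the 256 values used for byte-sized codes, and with 16 bits the 65,536 values used later for UNICODE.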

The next unit of information encoding after the bit is the byte. Its relationship to the bit is expressed as follows: 1 byte = 8 bits = 1 character.

Usually one byte encodes one character of textual information. Based on this, for text documents, the size in bytes corresponds to the lexical size in characters.

A larger unit of encoding information is a kilobyte, related to a byte by the following ratio: 1 Kb = 1024 bytes.

Larger units of information encoding are obtained by adding the prefixes mega (MB), giga (GB) and tera (TB):

1 MB = 1024 KB;

1 GB = 1024 MB;

1 TB = 1024 GB.

To encode an integer in binary, the number is repeatedly divided by two until the quotient equals one. The remainders of each division, written from right to left together with the last quotient, form the binary analogue of the decimal number.

To encode integers from 0 to 255, 8 bits of binary code (one byte) are enough. Using 16 bits, you can encode integers from 0 to 65,535, and 24 bits cover more than 16.5 million different values.
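The division procedure described above can be sketched in Python (our illustration, not part of the original notes); the loop collects remainders and reverses them, which is equivalent to writing them from right to left:

```python
def to_binary(n: int) -> str:
    """Encode a non-negative integer by repeatedly dividing by two
    and collecting the remainders from right to left."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # remainder of the division by two
        n //= 2                  # integer quotient
    return "".join(reversed(bits))

print(to_binary(13))   # 1101
print(to_binary(255))  # 11111111
```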

In order to encode real numbers, 80-bit encoding is used. In this case, the number is first converted to a normalized form, for example:

2.1427926 = 0.21427926 × 10^1;

500 000 = 0.5 × 10^6.

The first part of the encoded number is called the mantissa, and the second the characteristic (the exponent). Most of the 80 bits are reserved for storing the mantissa, and a fixed number of bits for storing the characteristic.
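The same mantissa/characteristic split exists in binary floating point. A small sketch using Python's standard `math.frexp` (our illustration; the notes themselves do not reference Python):

```python
import math

# math.frexp splits x into a mantissa m and an exponent e such that
# x = m * 2**e and 0.5 <= |m| < 1 -- the binary analogue of the
# normalized decimal form shown above.
m, e = math.frexp(500000.0)
print("mantissa:", m)
print("characteristic (exponent):", e)
print("reconstructed:", m * 2 ** e)
```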

1.4. Encoding of text information

Textual information is encoded in binary code through the designation of each character of the alphabet by a certain integer. Using eight binary digits, it is possible to encode 256 different characters. This number of characters is enough to express all characters of the English and Russian alphabets.

In the early years of the development of computer technology, the difficulties of coding textual information were caused by the lack of the necessary coding standards. At present, on the contrary, the existing difficulties are associated with a multitude of simultaneously operating and often conflicting standards.

For English, which is an unofficial international medium of communication, these difficulties have been resolved. The US Standards Institute developed and introduced the ASCII (American Standard Code for Information Interchange) coding system.

To encode the Russian alphabet, several encoding options have been developed:

1) Windows-1251 - introduced by Microsoft; given the widespread use of operating systems (OS) and other software products of this company in the Russian Federation, it has become widespread;

2) KOI-8 (eight-bit information exchange code) - another popular encoding of the Russian alphabet, common in computer networks on the territory of the Russian Federation and in the Russian sector of the Internet;

3) ISO (International Organization for Standardization) - an international standard for encoding Russian-language characters. In practice, this encoding is rarely used.

A limited set of codes (256) creates difficulties for developers of a unified system for encoding textual information. As a result, it was proposed to encode characters not with 8-bit binary numbers but with numbers of greater bit width, which expanded the range of possible code values. The 16-bit character encoding system is called universal - UNICODE. Sixteen bits allow unique codes for 65,536 characters, which is enough to fit most languages in one character table.

Despite the simplicity of the proposed approach, the practical transition to this encoding system could not be implemented for a long time because of insufficient computer resources: in the UNICODE encoding system all text documents automatically become twice as large. In the late 1990s, when hardware reached the required level, a gradual transfer of documents and software to the UNICODE coding system began.
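The size differences between these encodings are easy to observe in Python (our illustration, not part of the original notes). Windows-1251 and KOI-8 use one byte per character, a 16-bit encoding uses two, and UTF-8 uses one byte for Latin letters and two for Cyrillic:

```python
text = "Hi Привет"  # mixed Latin and Cyrillic sample

for name in ("cp1251", "koi8-r", "utf-16-le", "utf-8"):
    data = text.encode(name)
    print(f"{name}: {len(data)} bytes")
```

The "twice as large" observation for 16-bit text in the paragraph above corresponds to the `utf-16-le` line.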

1.5. Graphic information encoding

There are several ways to encode graphic information.

When a black-and-white graphic image is examined with a magnifying glass, it can be seen to consist of many tiny dots forming a characteristic pattern (a raster). The linear coordinates and individual properties of each image point can be expressed as integers, so the raster encoding method is based on using binary code to represent graphic data. The common standard represents black-and-white illustrations as combinations of dots with 256 shades of gray, so an 8-bit binary number is enough to encode the brightness of any point.

The coding of color graphic images is based on the principle of decomposing an arbitrary color into basic components, for which three primary colors are used: red (Red), green (Green) and blue (Blue). In practice, it is accepted that any color perceived by the human eye can be obtained by mixing these three colors. This coding system is called RGB (after the first letters of the primary colors). When 24 bits are used to encode color graphics, this mode is called True Color.

Each of the primary colors is mapped to a color that complements the primary color to white. For any of the primary colors, the complementary color will be the one that is formed by the sum of a pair of other primary colors. Accordingly, among the additional colors, cyan (Cyan), magenta (Magenta) and yellow (Yellow) can be distinguished. The principle of decomposition of an arbitrary color into its constituent components is used not only for primary colors, but also for additional ones, that is, any color can be represented as the sum of cyan, magenta and yellow components. This color coding method is used in printing, but it also uses the fourth ink - black (Black), so this coding system is indicated by four letters - CMYK. To represent color graphics in this system, 32 bits are used. This mode is also called full color.

By reducing the number of bits used to encode the color of each point, the amount of data is reduced, but the range of encoded colors is noticeably reduced. Encoding color graphics with 16-bit binary numbers is called the High Color mode. When encoding graphic color information using 8 bits of data, only 256 shades can be transmitted. This color coding method is called index.
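In True Color mode the three 8-bit channels are commonly packed into one 24-bit value. A minimal sketch in Python (our illustration; the packing order shown is one common convention, not something the notes specify):

```python
def pack_rgb(r: int, g: int, b: int) -> int:
    """Pack three 8-bit channels into one 24-bit True Color value
    (red in the high byte, blue in the low byte)."""
    return (r << 16) | (g << 8) | b

print(hex(pack_rgb(255, 255, 255)))  # 0xffffff - white
print(hex(pack_rgb(255, 0, 0)))      # 0xff0000 - pure red
print(2 ** 24)                       # 16777216 distinct True Color values
```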

1.6. Audio encoding

Currently, there is no single standard system for encoding audio information, since techniques and methods for working with audio began to develop later than methods for working with other types of information. Therefore, many companies working in the field of information encoding have created their own corporate standards for audio information. Among these corporate standards, however, two main approaches stand out.

The FM (Frequency Modulation) method is based on the assertion that, theoretically, any complex sound can be decomposed into a sequence of simple harmonic signals of different frequencies. Each of these harmonic signals is a regular sine wave and can therefore be described numerically, i.e. encoded. Sound signals form a continuous spectrum, that is, they are analog, so their decomposition into harmonic series and presentation as discrete digital signals is carried out by special devices - analog-to-digital converters (ADCs). The reverse conversion, needed to reproduce sound encoded with a numerical code, is performed by digital-to-analog converters (DACs). These transformations of audio signals lose information in a way tied to the coding method, so the quality of sound recording with the FM method is usually not satisfactory enough and corresponds to the sound of the simplest electric musical instruments, with a timbre characteristic of electronic music. At the same time, this method yields a very compact code, so it was widely used in the years when the resources of computer technology were clearly insufficient.
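What an ADC does to a harmonic signal can be sketched in a few lines of Python (a toy model of our own, with an assumed sample rate and bit depth; real converters work in hardware):

```python
import math

SAMPLE_RATE = 8000   # samples per second (assumed for illustration)
FREQ = 440.0         # frequency of the harmonic signal, Hz

def sample_sine(n_samples: int) -> list[int]:
    """Sample a sine wave and quantize each value to 8 bits (0..255),
    a toy sketch of analog-to-digital conversion."""
    out = []
    for i in range(n_samples):
        x = math.sin(2 * math.pi * FREQ * i / SAMPLE_RATE)  # analog value in [-1, 1]
        out.append(round((x + 1) / 2 * 255))                # discrete 8-bit code
    return out

print(sample_sine(5))
```

The rounding step is exactly where the information loss mentioned above occurs: infinitely many analog values are mapped onto only 256 codes.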

The main idea of ​​the wave-table synthesis method (Wave-Table) is that in pre-prepared tables there are sound samples for many different musical instruments. These sound samples are called samples. The numerical codes that are embedded in the sample express such characteristics as the type of instrument, its model number, pitch, duration and intensity of the sound, the dynamics of its change, some components of the environment in which the sound is observed, and other parameters that characterize the features of the sound. Since real sounds are used for the samples, the quality of the encoded sound information is very high and approaches the sound of real musical instruments, which is more in line with the current level of development of modern computer technology.

1.7. Modes and methods of information transfer

For correct data exchange between nodes of a local area network, certain modes of information transfer are used:

1) simplex (unidirectional) transmission;

2) half-duplex transmission, in which the reception and transmission of information by the source and receiver are carried out alternately;

3) duplex transmission, in which parallel simultaneous transmission is performed, i.e. each station simultaneously transmits and receives data.

In information systems, duplex serial data transmission is very often used. A distinction is made between synchronous and asynchronous methods of serial data transmission.

The synchronous method differs in that data is transferred in blocks. To synchronize the operation of the receiver and transmitter, synchronization bits are sent at the beginning of the block. After that, the data, the error detection code and the symbol indicating the end of the transfer are transmitted. This sequence forms the standard data transmission scheme for the synchronous method. In the case of synchronous transmission, data is transmitted both as symbols and as a stream of bits. The error detection code is most often a cyclic redundancy check (CRC), which is determined by the contents of the data field. With its help, you can unambiguously determine the reliability of the received information.
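The append-and-verify use of a CRC can be sketched with Python's standard `zlib.crc32` (our illustration; the notes do not specify a particular CRC polynomial or library):

```python
import zlib

data = b"synchronous block payload"
crc = zlib.crc32(data)                  # 32-bit cyclic redundancy check
frame = data + crc.to_bytes(4, "big")   # transmitter appends the CRC to the block

# Receiver side: recompute the CRC over the data field and compare.
payload, received_crc = frame[:-4], int.from_bytes(frame[-4:], "big")
print(zlib.crc32(payload) == received_crc)  # True for an undamaged frame
```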

The advantages of the synchronous data transfer method include:

▪ high efficiency;

▪ reliable built-in error detection mechanism;

▪ high data transfer speed.

The main disadvantage of this method is the expensive interface hardware.

The asynchronous method differs in that each character is transmitted in a separate package. The start bits alert the receiver to the start of transmission, after which the character itself is transmitted. The parity bit is used to check the validity of the transmission: it is one when the number of ones in the character is odd, and zero when it is even. The last bit, called the "stop bit", signals the end of the transmission. This sequence forms the standard data transfer scheme for the asynchronous method.
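The parity rule above is one line of code; a minimal sketch in Python (our illustration, not part of the original notes):

```python
def parity_bit(char_code: int) -> int:
    """Parity bit as described: 1 when the number of one-bits in the
    character is odd, 0 when it is even."""
    return bin(char_code).count("1") % 2

print(parity_bit(0b1101))  # three ones -> 1
print(parity_bit(0b1001))  # two ones  -> 0
```

Note how this matches the stated limitation: if two bits flip at once, the parity is unchanged and the error goes undetected.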

The advantages of the asynchronous transfer method are:

▪ inexpensive (compared to synchronous) interface equipment;

▪ simple proven transmission system.

The disadvantages of this method include:

▪ loss of a third of the bandwidth for transmitting service bits;

▪ low transmission speed compared to the synchronous method;

▪ inability to determine the reliability of the received information using the parity bit in case of multiple errors.

The asynchronous transfer method is used in systems in which data exchange occurs from time to time and a high data transfer rate is not required.

1.8. Information Technology

Information is one of the most valuable resources of society, so the process of its processing, as well as material resources (for example, oil, gas, minerals, etc.), can be perceived as a kind of technology. In this case, the following definitions will be valid.

Information resources are a collection of data that are of value to an enterprise (organization) and act as material resources. These include texts, knowledge, data files, etc.

Information technology is a set of methods, production processes and software and hardware tools that are combined into a technological chain. This chain ensures the collection, storage, processing, output and dissemination of information in order to reduce the complexity of using information resources, as well as increase their reliability and efficiency.

According to the definition adopted by UNESCO, information technology is a set of interrelated, scientific, technological and engineering disciplines that study the methods of effective organization of the work of people who are engaged in the processing and storage of information, as well as computer technology and methods of organizing and interacting with people and production equipment.

The system of methods and production processes defines the techniques, principles and activities that regulate the design and use of software and hardware for data processing. Depending on the specific application tasks that need to be solved, various data processing methods and technical means are used. There are three classes of information technologies that allow you to work with various kinds of subject areas:

1) global, including models, methods and tools that formalize and allow the use of information resources of society as a whole;

2) basic, designed for a specific area of ​​application;

3) specific, realizing the processing of certain data when solving the functional tasks of the user (in particular, the tasks of planning, accounting, analysis, etc.).

The main purpose of information technology is the production and processing of information for its analysis and the adoption of an appropriate decision on its basis, which provides for the implementation of any action.

1.9. Stages of information technology development

There are several points of view on the development of information technology with the use of computers. Stages are distinguished according to the following criteria.

Stages distinguished by the problems of the informatization of society:

1) until the end of the 1960s. - the problem of processing large amounts of information in conditions of limited hardware capabilities;

2) until the end of the 1970s - the lag of software behind the level of development of hardware;

3) since the early 1980s. - problems of maximum satisfaction of the user's needs and the creation of an appropriate interface for working in a computer environment;

4) since the early 1990s. - development of an agreement and establishment of standards, protocols for computer communications, organization of access to strategic information, etc.

Stages distinguished by the advantage brought by computer technology:

1) since the early 1960s. - efficient processing of information when performing routine work with a focus on centralized collective use of computing center resources;

2) since the mid-1970s. - the emergence of personal computers (PCs). At the same time, the approach to creating information systems has changed - the orientation is shifting towards the individual user to support his decisions. Both centralized and decentralized data processing is used;

3) since the early 1990s. - development of telecommunication technology for distributed information processing. Information systems are used to help an organization fight competitors.

Stages distinguished by the types of technology tools:

1) until the second half of the 19th century - "manual" information technology, whose tools were pen, ink and paper;

2) from the end of the 19th century - "mechanical" technology, whose tools were the typewriter, telephone, voice recorder and mail;

3) the 1940s-1960s - "electrical" technology, whose tools were large electronic computers with the related software, electric typewriters, photocopiers and portable voice recorders;

4) since the early 1970s. - "electronic" technology, the main tools are large computers and automated control systems (ACS) and information retrieval systems (IPS) created on their basis, which are equipped with a wide range of software systems;

5) since the mid-1980s. - "computer" technology, the main toolkit is a PC with a wide range of standard software products for various purposes.

1.10. The advent of computers and computer technology

For many centuries, people have been trying to create various devices to facilitate calculations. In the history of the development of computers and computer technologies, there are several important events that have become decisive in the further evolution.

In the 1640s, B. Pascal invented a mechanical device that could be used to add numbers.

At the end of the 17th century, G. Leibniz created a mechanical device for adding and multiplying numbers.

In 1946, the first mainframe computers appeared. American scientists J. von Neumann, H. Goldstine and A. Burks published a work in which they presented the basic principles of creating a universal computer. Since the late 1940s, the first prototypes of such machines, conventionally called first-generation computers, began to appear. These computers were built on vacuum tubes and lagged behind modern calculators in performance.

In the further development of computers, the following stages are distinguished:

▪ second generation of computers - the invention of transistors;

▪ third generation of computers - creation of integrated circuits;

▪ fourth generation of computers - the emergence of microprocessors (1971).

The first microprocessors were produced by Intel, which led to the emergence of a new generation of PCs. Due to the mass interest in such computers that arose in society, IBM (International Business Machines Corporation) developed a new project to create them, and Microsoft developed software for this computer. The project ended in August 1981, and the new PC became known as the IBM PC.

The developed computer model became very popular and quickly ousted all previous IBM models from the market in the next few years. With the invention of the IBM PC, the standard IBM PC-compatible computers began to be produced, which make up the majority of the modern PC market.

In addition to IBM PC-compatible computers, there are other types of computers designed to solve problems of varying complexity in various areas of human activity.

1.11. The evolution of the development of personal computers

The development of microelectronics led to the emergence of microminiature integrated electronic elements that replaced semiconductor diodes and transistors and became the basis for the development and use of PCs. These computers had a number of advantages: they were compact, easy to use and relatively cheap.

In 1971, Intel created the i4004 microprocessor, and in 1974, the i8080, which had a huge impact on the development of microprocessor technology. This company to this day remains the market leader in the production of microprocessors for PCs.

Initially, PCs were developed on the basis of 8-bit microprocessors. One of the first manufacturers of computers with a 16-bit microprocessor was IBM, which until the 1980s specialized in the production of large computers. In 1981, it first released a PC that used the principle of open architecture, which made it possible to change the configuration of the computer and improve its properties.

In the late 1970s, other large companies in leading countries (USA, Japan, etc.) also began to develop PCs based on 16-bit microprocessors.

In 1984, Apple's Macintosh appeared - a competitor to the IBM PC. In the mid-1980s, computers based on 32-bit microprocessors were released. 64-bit systems are currently available.

According to the type of values ​​of the main parameters and taking into account the application, the following groups of computer equipment are distinguished:

▪ supercomputer - a unique ultra-efficient system used to solve complex problems and large calculations;

▪ server - a computer that provides its own resources to other users; there are file servers, print servers, database servers, etc.;

▪ personal computer - a computer designed for use in the office or at home. The user can configure, maintain and install software for this type of computer;

▪ professional workstation - a computer with enormous performance and designed for professional work in a certain area. Most often it is supplied with additional equipment and specialized software;

▪ laptop - a portable computer with the computing power of a PC. It can function for some time without power from the electrical network;

▪ a pocket PC (electronic organizer), no larger in size than a calculator, keyboard or keyboardless, similar in functionality to a laptop;

▪ network PC - a computer for business use with a minimum set of external devices. Operation support and software installation are carried out centrally. It is also used to work in a computer network and to function offline;

▪ terminal - a device used when working in offline mode. The terminal does not contain a processor for executing commands; it only performs operations of entering and transmitting user commands to another computer and returning the result to the user.

The market for modern computers and the number of machines produced are determined by market needs.

1.12. Structure of modern computing systems

A modern PC such as the IBM PC has several main components:

▪ a system unit that organizes work, processes information, makes calculations, and ensures communication between a person and a computer. The PC system unit includes a motherboard, speaker, fan, power supply, two disk drives;

▪ system (motherboard) board, which consists of several dozen integrated circuits for various purposes. The integrated circuit is based on a microprocessor, which is designed to perform calculations on a program stored in a storage device and general control of the PC. The speed of a PC depends on the speed of the processor;

▪ PC memory, which is divided into internal and external:

a) internal (main) memory is a storage device associated with the processor and designed to store used programs and data that are involved in calculations. Internal memory is divided into operational (random access memory - RAM) and permanent (read-only memory - ROM). Random access memory is intended for receiving, storing and issuing information, and permanent memory is for storing and issuing information;

b) external memory (external storage device - ESD) is used to store large amounts of information and exchange it with RAM. By design, external storage devices are separate from the central devices of the PC;

▪ audio card (audio card), used for playing and recording sound;

▪ video card (video card), which provides playback and recording of a video signal.

External input devices in a PC include:

a) keyboard - a set of sensors that perceive pressure on the keys and close some electrical circuit;

b) mouse - a manipulator that simplifies the work with most computers. There are mechanical, optical-mechanical and optical mice, as well as wired and wireless;

c) scanner - a device that allows you to enter text, pictures, photographs, etc. into a computer in graphical form.

External information output devices are:

a) a monitor used to display various types of information on the screen. Monitor screen size is measured in inches as the diagonal distance between opposite corners of the screen;

b) a printer used to print text and graphics prepared on a computer. There are dot matrix, inkjet and laser printers.

External input devices are used to make information that the user has available to the computer. The main purpose of an external output device is to present the available information in a form accessible to the user.

Author: Kozlova I.S.
