Welcome To My Blog

#History_of_Computer / #What_is_Computer


The first mechanical calculating machine, a precursor to the digital computer, was invented in 1642 by the French mathematician Blaise Pascal. That device used a series of ten-tooth wheels with each tooth representing a digit from 0 to 9. The wheels were connected in such a way that numbers could be added together by advancing the correct number of teeth. In 1670 the German philosopher and mathematician Gottfried Wilhelm Leibniz perfected this machine and invented one that could also multiply.
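Pascal's carrying mechanism is easy to picture in code. Here is a minimal Python sketch (an illustration of the idea, not a reconstruction of the actual machinery):

    # A toy model of Pascal's ten-tooth wheels: each wheel holds a digit 0-9,
    # and advancing a wheel past its 9th tooth wraps it to 0 while advancing
    # the next wheel by one tooth (the carry).

    def advance(wheels, position, teeth):
        """Advance one wheel by `teeth` steps, propagating any carry."""
        total = wheels[position] + teeth
        wheels[position] = total % 10                   # the wheel wraps after 9
        if total >= 10 and position + 1 < len(wheels):
            advance(wheels, position + 1, total // 10)  # carry to the next wheel

    def add(wheels, number):
        """Add a whole number by advancing each wheel by the matching digit."""
        for position, digit in enumerate(reversed(str(number))):
            advance(wheels, position, int(digit))

    wheels = [0] * 6               # a six-wheel machine, all wheels at 0
    add(wheels, 247)
    add(wheels, 385)
    print(list(reversed(wheels)))  # [0, 0, 0, 6, 3, 2], i.e. 247 + 385 = 632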

The French inventor Joseph-Marie Jacquard, when designing an automatic loom, used thin perforated wooden plates to control the weaving of complex fabric designs. During the 1880s the American statistician Herman Hollerith conceived the idea of using punched cards, similar to Jacquard's plates, to process data. Hollerith managed to compile statistical information for the 1890 United States population census using a system that passed punched cards over electrical contacts.

The Analytical Engine:


Also in the 19th century, the British mathematician and inventor Charles Babbage worked out the principles of the modern digital computer. He invented a series of machines, such as the Difference Engine, designed to solve complex mathematical problems. Babbage and his collaborator, the British mathematician Augusta Ada Byron (1815-1852), daughter of the English poet Lord Byron, are considered by many historians to be the true inventors of the modern digital computer. The technology of the time was not capable of putting their sound concepts into practice, but one of his inventions, the Analytical Engine, already had many of the characteristics of the modern computer. It included an input stream in the form of a deck of punched cards, a memory for storing data, a processor for mathematical operations, and a printer for permanent records.

First computers:


Analog computers began to be built in the early 1900s. The first models made calculations using shafts and rotating gears. With these machines, numerical approximations of equations too difficult to solve by any other method could be evaluated. During the two world wars, analog computing systems, first mechanical and later electrical, were used to predict torpedo trajectories in submarines and to remotely control bombs in aviation.

Electronic computers:



During World War II (1939-1945), a team of scientists and mathematicians working at Bletchley Park, north of London, created what is considered the first fully electronic digital computer: the Colossus. By December 1943 the Colossus, which incorporated 1,500 valves or vacuum tubes, was already operational. It was used by the team led by Alan Turing to decode encrypted German radio messages. In 1939, and independently of this project, John Atanasoff and Clifford Berry had already built a prototype electronic machine at Iowa State College (USA). This prototype and the subsequent research were carried out with little publicity and were later overshadowed by the development of the Electronic Numerical Integrator And Computer (ENIAC) in 1945. The ENIAC, which the evidence showed drew heavily on the Atanasoff-Berry Computer (ABC), held a patent that was invalidated in 1973, several decades later.

The ENIAC contained 18,000 vacuum tubes and had a speed of several hundred multiplications per minute, but its program was wired into the processor and had to be modified manually. A successor to the ENIAC was built with program storage based on the concepts of the Hungarian-American mathematician John von Neumann. The instructions were stored in memory, which freed the computer from the speed limitations of the paper-tape reader during execution and allowed problems to be solved without rewiring the machine.

The use of the transistor in computers in the late 1950s marked the advent of logic elements smaller, faster, and more versatile than those valve machines allowed. Because transistors use much less energy and have a longer lifespan, their development led to the birth of more sophisticated machines, which were called second-generation computers. The components became smaller, as did the spaces between them, making the systems cheaper to manufacture.

Integrated circuits:


At the end of the 1960s the integrated circuit (IC) appeared, which made it possible to manufacture several transistors on a single silicon substrate onto which the interconnecting wires were soldered. The integrated circuit brought a further reduction in price, size, and error rates. The microprocessor became a reality in the mid-1970s with the introduction of the Large Scale Integration (LSI) circuit and, later, the Very Large Scale Integration (VLSI) circuit, with several thousand interconnected transistors soldered onto a single silicon substrate.

COMPUTER HISTORY:

 From the abacus to the punched card:

 THE ABACUS: perhaps the first mechanical calculating device in existence. It has been estimated to have originated at least 5,000 years ago, and its effectiveness has stood the test of time.

 THE PASCALINE: The inventor and painter Leonardo da Vinci (1452-1519) sketched out the ideas for a mechanical adder. A century and a half later, the French philosopher and mathematician Blaise Pascal (1623-1662) finally invented and built the first mechanical adder. It was called the Pascaline, and it worked as machinery based on gears and wheels. Although Pascal was praised throughout Europe for his achievement, the Pascaline was a dismal financial failure, because at that time it was more expensive than the human labor it replaced for arithmetic calculations.

 BABBAGE'S FOLLY: Charles Babbage (1793-1871), an English visionary and Cambridge professor, could have accelerated the development of computers had he and his inventive mind been born 100 years later. He advanced the state of computing hardware by inventing the "Difference Engine," capable of calculating mathematical tables. In 1834, while working on improvements to the Difference Engine, Babbage conceived the idea of an "Analytical Engine."

 In essence, this was a general-purpose computer. According to its design, Babbage's Analytical Engine could add, subtract, multiply, and divide in automatic sequence at a rate of 60 additions per minute. The design required thousands of gears and mechanisms that would have covered the area of a soccer field and would have needed to be powered by a locomotive. Skeptics nicknamed it "Babbage's Folly." Charles Babbage worked on his Analytical Engine until his death.

 Babbage's detailed drawings described the features now incorporated into the modern electronic computer. If Babbage had lived in the age of electronic technology and precision parts, he would have anticipated the birth of the electronic computer by several decades. Ironically, his work was so neglected that some pioneers of the electronic computer were completely unaware of his concepts of memory, printers, punched cards, and sequential program control.

THE FIRST PUNCHED CARD: The weaving loom, invented in 1801 by the Frenchman Joseph-Marie Jacquard (1753-1834) and still used today, is controlled by punched cards. Jacquard's loom operates as follows: the cards are strategically punched and arranged in a certain sequence to indicate a particular weaving design. Charles Babbage wanted to apply the concept of the Jacquard loom's punched cards to his Analytical Engine. In 1843 Lady Ada Augusta Lovelace suggested that punched cards could be adapted so that Babbage's engine would repeat certain operations. For this suggestion, some people consider Lady Lovelace the first programmer.

 HERMAN HOLLERITH (1860-1929): The U.S. Census Bureau did not complete the 1880 census until 1888. The Bureau's leadership had already concluded that the ten-year census would soon take longer than 10 years to complete. The Census Bureau commissioned the statistician Herman Hollerith to apply his punched-card expertise to the 1890 census.

 With Hollerith's punched-card processing and punched-card tabulating machine, the census was completed in just 3 years, and the Bureau saved around $5,000,000. Thus began automatic data processing. Hollerith did not take the idea of punched cards from Jacquard's invention but from the "punch photograph." Some railroads of the time issued tickets with physical descriptions of the passenger; conductors punched holes in the tickets to record the passenger's hair color, eye color, and nose shape. That gave Hollerith the idea of making a punched photograph of each person to be tabulated.

 Hollerith founded the Tabulating Machine Company and sold his products worldwide. Demand for his machines extended even to Russia: the first census carried out in Russia, in 1897, was recorded with the Hollerith tabulator. In 1911 the Tabulating Machine Company, merging with other companies, formed the Computing-Tabulating-Recording Company.

ELECTROMECHANICAL ACCOUNTING MACHINES (EAM): The results of the tabulating machines had to be brought up to date by manual means until, in 1919, the Computing-Tabulating-Recording Company announced the appearance of the printer/enumerator. This innovation revolutionized the way companies conducted their operations.

To better reflect the scope of its business interests, in 1924 the company changed its name to International Business Machines Corporation (IBM). For decades, from the mid-1950s on, punched-card technology was perfected with the introduction of more devices with more complex capabilities. Since each card generally contained one record (a name, address, etc.), punched-card processing was also known as unit record processing. The family of electromechanical accounting machines (EAM) of punched-card devices comprised the card punch, verifier, reproducer, summary punch, interpreter, sorter, collator, calculator, and accounting machine. The machine-room operator at a punched-card installation had a physically demanding job. Some machine rooms resembled factory floors: punched cards and printed output were moved from one device to another on manual carts, and the noise was as intense as that of an automobile assembly plant.

Pioneers of computing:


 ATANASOFF AND BERRY: An old patent for a device that many people believed was the first electronic digital computer was invalidated in 1973 by order of a federal court, and John V. Atanasoff was officially credited as the inventor of the electronic digital computer. Dr. Atanasoff, a professor at Iowa State University, developed the first electronic digital computer between 1937 and 1942. He called his invention the Atanasoff-Berry Computer, or simply ABC. Clifford Berry, a graduate student, was an invaluable assistant in building the ABC.

 Some authors consider that no single person can be credited with the invention of the computer, but that it was the effort of many people. However, in the old Physics building of Iowa State University there is a plaque with the following legend: "The world's first electronic digital computer of automatic operation was built in this building in 1939 by John Vincent Atanasoff, mathematician and physicist on the college faculty, who conceived the idea, and by Clifford Edward Berry, a graduate student in physics."

 MAUCHLY AND ECKERT: After several conversations with Dr. Atanasoff, reading notes describing the principles of the ABC computer, and seeing it in person, Dr. John W. Mauchly collaborated with J. Presper Eckert, Jr. to develop a machine that would calculate trajectory tables for the U.S. Army. The final product, a large-scale, fully operational electronic computer, was completed in 1946 and was called ENIAC (Electronic Numerical Integrator And Computer).

The ENIAC, built for World War II applications, was completed in 30 months by a team of scientists working around the clock. A thousand times faster than its electromechanical predecessors, the ENIAC burst forth as a major breakthrough in computing technology. It weighed 30 tons, filled a 6 m x 12 m room with its 18,000 vacuum tubes, and had to be programmed manually by setting more than 6,000 switches on 3 plugboards. Entering a new program was a tedious process that took days or even weeks. Unlike current computers, which operate on a binary system (0, 1), the ENIAC operated on a decimal one (0, 1, 2 ... 9). The ENIAC required a large amount of electricity: legend has it that the ENIAC, built at the University of Pennsylvania, dimmed the lights of Philadelphia whenever it was switched on. The impressive scale and numerous general applications of the ENIAC signaled the beginning of the first generation of computers.

 In 1945 John von Neumann, who had worked with Eckert and Mauchly at the University of Pennsylvania, published an article on program storage. The stored-program concept meant that a program could be read into the computer's memory and its instructions then executed without having to be entered again. The first computer to use this concept was the EDVAC (Electronic Discrete Variable Automatic Computer), developed by von Neumann, Eckert, and Mauchly.

 Stored programs gave computers tremendous flexibility and reliability, making them faster and less error-prone than mechanically wired programs. A stored-program computer could serve many applications simply by loading and running the appropriate program. Up to this point, programs and data could be entered into the computer only in binary notation, the only code computers "understand."

The next major development in computer design was the interpreter program, which allowed people to communicate with computers using means other than binary numbers. In 1952 Grace Murray Hopper, a U.S. Navy officer, developed the first compiler, a program that translates English-like statements into machine-readable binary code; this line of work led to COBOL (COmmon Business-Oriented Language).

 Computer generations


 First Generation of Computers


 (1951 to 1958) First-generation computers used vacuum tubes to process information. Operators entered data and programs in a special code using punched cards. Internal storage was provided by a rapidly rotating drum on which a read/write device placed magnetic marks. These vacuum-tube computers were much larger and generated more heat than contemporary models.

 Eckert and Mauchly contributed to the development of first-generation computers by forming a private company and building the UNIVAC I, which the Census Bureau used to tabulate the 1950 census. IBM had a monopoly on punched-card data processing equipment and was enjoying a big boom in products like meat slicers, grocery scales, time clocks, and other items; however, it had not obtained the contract for the 1950 census.

IBM then began building electronic computers; its first entry was the IBM 701 in 1953. After a slow but exciting start, the IBM 701 became a commercially viable product. In 1954 the IBM 650 model was introduced, and it is largely why IBM enjoys such a large share of the computer market today. IBM's management took a big risk and estimated sales of 50 computers; this number was greater than the number of computers installed in the entire U.S. at the time. In fact, IBM installed 1,000 of them. The rest is history. Although expensive and of limited use, computers were quickly accepted by private companies and government. By the mid-1950s IBM and Remington Rand had established themselves as leaders in computer manufacturing.

 Second generation


 (1959-1964)

Transistor
Limited compatibility

The invention of the transistor made possible a new generation of computers: faster, smaller, and with less need for ventilation. Their cost, however, was still a significant portion of a company's budget. Second-generation computers also used networks of magnetic cores instead of rotating drums for primary storage. These cores were small rings of magnetic material, strung together, in which data and instructions could be stored.

Computer programs also improved. COBOL developed during the 1st generation was already commercially available. Programs written for one computer could be transferred to another with minimal effort. Writing a program no longer required a full understanding of computer hardware. 2nd Generation computers were substantially smaller and faster than bulb computers and were used for new applications, such as systems for airline reservations, air traffic control, and general-purpose simulations.

Companies began to apply computers to record-keeping tasks, such as inventory management, payroll, and accounting. The U.S. Navy used second-generation computers to create the first flight simulator (Whirlwind I). Honeywell ranked as the leading competitor during the second generation of computers. Burroughs, Univac, NCR, CDC, and Honeywell, IBM's biggest competitors during the 1960s, became known as the BUNCH group.

 Third generation


 (1964-1971)

Integrated circuits
Compatibility with older equipment
Multiprogramming
Minicomputer

Third-generation computers emerged with the development of integrated circuits (silicon chips), in which thousands of electronic components are placed in miniature. Computers again became smaller and faster, gave off less heat, and were more energy-efficient. Before the advent of integrated circuits, computers were designed for either mathematical or business applications, but not both.

Integrated circuits allowed computer manufacturers to increase the flexibility of programs, and to standardize their models. The IBM 360, one of the first commercial computers to use integrated circuits, could perform both numerical analysis and file management or processing. Customers were able to scale their 360 systems to larger IBM models and still be able to run their current programs. Computers worked at such speeds that they provided the ability to run more than one program simultaneously (multiprogramming).

 For example, the computer could be calculating the payroll and accepting orders at the same time. Minicomputers: with the introduction of the 360 models, IBM captured 70% of the market. To avoid competing directly with IBM, Digital Equipment Corporation (DEC) redirected its efforts toward small computers. Much less expensive to buy and operate than large computers, minicomputers were developed during the second generation but reached their peak between the 1960s and 1970s.

The fourth generation


 (1971 to date)

Microprocessor
Memory chips
Microminiaturization
 Two improvements in computer technology mark the start of the fourth generation: the replacement of magnetic-core memories by silicon-chip memories, and the placement of many more components on a chip, a product of the micro-miniaturization of electronic circuits. The small size of the microprocessor chip made personal computers (PCs) possible. Today's LSI (Large Scale Integration) and VLSI (Very Large Scale Integration) technologies allow hundreds of thousands of electronic components to be placed on one chip. Using VLSI, a manufacturer can make a small computer rival a first-generation machine that occupied an entire room.

 Classification of computers:

Supercomputers
Macrocomputers
Minicomputers
Microcomputers or PCs
 Supercomputers:

A supercomputer is the fastest and most powerful type of computer that exists at any given time. These machines are designed to process huge amounts of information in a short time and are dedicated to a specific task. They are also the most expensive, with prices of $30 million and up, and they require special temperature control to dissipate the heat their components generate. Some examples of the tasks supercomputers are put to are the following:

 1. Search and study of nuclear energy and weapons.

2. Search for oil fields with large seismic databases.

3. The study and prediction of tornadoes.

4. The study and prediction of the climate of any part of the world.

5. The design and modeling of airplanes, flight simulators, etc.

 Due to their price, very few supercomputers are built in a year.

Macrocomputers or Mainframes:


 Macrocomputers are also known as mainframes. Mainframes are large, fast, and expensive systems capable of controlling hundreds of users simultaneously, as well as hundreds of input and output devices. Mainframes cost from $350,000 to several million dollars.

 In one sense mainframes are more powerful than supercomputers, because they support more programs simultaneously; but supercomputers can run a single program faster than a mainframe. In the past, mainframes occupied entire rooms or even entire floors of a building. Today a mainframe looks like a row of filing cabinets in a room with a false floor (which hides the hundreds of peripheral cables), and its temperature has to be controlled.

Minicomputers:

 In 1960 the minicomputer emerged: a smaller version of the macrocomputer. Being task-oriented, it did not need all the peripherals a mainframe needs, which helped reduce its price and maintenance costs. In size and processing power, minicomputers sit between mainframes and workstations. In general, a minicomputer is a multiprocessing system (several processes in parallel) capable of supporting 10 to 200 users simultaneously. They are currently used for storing large databases, for industrial automation, and for multi-user applications.

 Microcomputers or PCs:

 Microcomputers or personal computers (PCs) had their origin in the creation of microprocessors. A microprocessor is "a computer on a chip," that is, a self-contained integrated circuit. PCs are computers for personal use; they are relatively cheap and are currently found in offices, schools, and homes.

 The term PC derives from the fact that in 1981 IBM® released its "IBM PC" model, which became the archetype of the computer for "personal" use. The term "PC" became standardized, and the clones that other companies later brought out were called "PC compatible," using processors of the same type as IBM's but at a lower cost, and able to run the same type of programs.

 There are other types of microcomputers, such as the Macintosh®, which are not IBM-compatible but which in many cases are also called "PCs" because they are for personal use. Currently there are several types of PC design: personal computers with a mini-tower cabinet, separate from the monitor; "laptop" or "notebook" personal computers; the most common personal computers, with a horizontal cabinet separate from the monitor; and personal computers that compact the monitor and the CPU into a single unit.

 Laptops are computers designed to be carried from one place to another. They are powered by rechargeable batteries, weigh between 2 and 5 kilos, and most have an integrated LCD (Liquid Crystal Display) screen.

Workstations:

In processing power, workstations sit between minicomputers and macrocomputers.

 Workstations are a type of computer used for applications that require moderate processing power and relatively high-quality graphics capabilities. They are used for engineering applications such as CAD (Computer-Aided Design) and CAM (Computer-Aided Manufacturing), for advertising, and for software development. In networking, the word "workstation" is also used to refer to any computer connected to a local area network.

Hardware:

Input
Processing
Secondary Storage
Output

Hardware Definition:

 Hardware comprises all the physical components of a computer, everything visible and tangible. Hardware performs the 4 fundamental activities: input, processing, output, and secondary storage.

Input:

Different devices are used to enter data into the computer, for example:

Keyboard:

The most commonly used input device, found on all computer equipment. The keyboard is made up of 3 parts: function keys, alphanumeric keys, and numeric keys.

 Mouse:

 It is the second most commonly used input device. The mouse is dragged along a surface to maneuver a pointer on the monitor screen. It was invented by Douglas Engelbart, and its name derives from its shape, which resembles that of a mouse.

 Light pen:

 This device looks much like an ordinary pen, but it is connected by a cord and requires special software. By touching the pen to the monitor, the user can choose commands from the programs.

 Digitizing tablet:

 It is a drawing surface with a marking stylus that works like a pencil. The tablet converts the movements of the stylus into digitized data that can be read by certain software packages. Sizes range from letter size to desktop size.

 Voice input (voice recognition):

 These systems convert a person's speech into digital signals. Most of these programs have to be "trained" to recognize the commands the user gives verbally. Voice recognition is used in the medical profession to let doctors compile reports quickly. More than 300 Kurzweil VoiceMed systems are currently installed in more than 200 hospitals in the United States. This innovative voice recognition system uses speaker-independence technology, which means the computer does not have to be trained to recognize the language or tone of voice of a single person: it can recognize the same word spoken by different individuals.

Touch-sensitive screens (Touch Screen):

 They allow you to give commands to the computer by touching certain parts of the screen. Very few software programs work with them, and users complain that the screens are too far from the keyboard, so their acceptance has been very low. Some department stores employ this type of technology to help customers find goods or services within the store.

 Barcode readers:

These are scanners that read the vertical bars that make up a code; this setup is known as a point of sale (POS) system. Grocery stores use the Universal Product Code (UPC). The code identifies the product so that the sale appears on the receipt, the item is deducted from inventory, and a purchase order is generated if necessary. Some readers are installed in a fixed surface and others are operated by hand.

 Scanners:


 Scanners convert text and color or black-and-white photos into a form a computer can read. The image can then be modified, printed, and saved. They can digitize a page of graphics in a few seconds and provide a fast, easy, and efficient way to enter printed information into a computer; printed text can also be entered using special software called OCR (Optical Character Recognition).

 Processing:


 The CPU (Central Processing Unit) is responsible for controlling the flow of data (I/O, input and output activities) and for executing program instructions on that data. It performs all calculations (addition, subtraction, multiplication, division, and comparison of numbers and characters). It is the "brain" of the computer.

 It is divided into 3 components:

1. Control Unit (CU)

2. Arithmetic/Logic Unit (ALU)

3. Primary storage area (memory)

Control unit:

It is essentially what governs all the activities of the computer; just as the CPU is the brain of the computer, the CU can be called the nucleus of the CPU. It supervises the execution of programs. It coordinates and controls the computer system, that is, it coordinates I/O activities. It determines which instruction must be executed and makes the data the instruction requires available. It determines where the data is stored and transfers it from those locations. Once the instruction has been executed, the control unit must determine where to put the result, for output or for later use.

 Arithmetic/Logic Unit:

This unit performs calculations (addition, subtraction, multiplication, and division) and logical operations (comparisons), and it transfers data between storage locations. It has a very important register known as the accumulator (ACC). When performing arithmetic and logical operations, the ALU moves data between itself and storage. The data used in processing is transferred from its storage location to the ALU, manipulated according to the program's instructions, and returned to storage. Since processing cannot be carried out in the storage area itself, data may pass between the ALU and storage several times before an operation is finished.
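That cycle (the control unit fetching instructions while the ALU works on the accumulator) can be sketched in a few lines of Python. The three-instruction vocabulary below is invented for illustration; it is not any real machine's instruction set:

    memory = {"A": 60, "B": 40, "RESULT": 0}     # primary storage (data cells)

    program = [                                  # the stored program
        ("LOAD", "A"),       # storage -> ACC
        ("ADD", "B"),        # the ALU adds a storage cell to ACC
        ("STORE", "RESULT"), # ACC -> storage
    ]

    acc = 0                          # the accumulator register (ACC)
    for opcode, operand in program:  # the control unit's fetch-execute loop
        if opcode == "LOAD":
            acc = memory[operand]
        elif opcode == "ADD":
            acc = acc + memory[operand]  # the ALU's arithmetic step
        elif opcode == "STORE":
            memory[operand] = acc

    print(memory["RESULT"])  # 100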

 Primary storage area:


 Memory gives the processor temporary storage for programs and data. All programs and data must be transferred to memory from an input device or from secondary storage (a disk, for example) before programs can run or data can be processed. Computers use 2 types of primary memory: ROM (Read-Only Memory), in which certain programs and information needed by the computer are permanently recorded and cannot be modified by the programmer.

The basic instructions for starting the computer are recorded here, and some notebook computers have spreadsheets, BASIC, etc. recorded in ROM. RAM (Random Access Memory), used by the user through programs, is volatile. This memory stores input data, the instructions of the programs currently running, the data resulting from processing, and the data being prepared for output.

 The data supplied to the computer remains in primary storage until it is used in processing. During processing, primary storage holds the intermediate and final data of all arithmetic and logical operations. Primary storage must also hold the instructions of the programs used in processing. Memory is subdivided into individual cells, each with the same capacity for storing data.

 Secondary Storage:


 Secondary storage is a permanent storage medium (non-volatile, unlike RAM). The process of transferring data into the computer is called reading; the process of transferring data from the computer to storage is called writing (a small sketch of both follows the list below). Currently, two technologies are mainly used to store information:

 1.- Magnetic storage.

2.-Optical storage. Some devices combine both technologies.
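As promised above, a minimal Python sketch of the writing and reading procedures, using an ordinary disk file as the non-volatile medium (the filename and record are arbitrary):

    data = "inventory record: 42 units"

    with open("record.txt", "w") as f:   # writing: computer -> storage
        f.write(data)

    with open("record.txt") as f:        # reading: storage -> computer
        restored = f.read()

    print(restored == data)  # True: the data survived outside volatile RAM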

 Magnetic Storage Devices:


 1.- Floppy Disks

2.- Hard Drives

3.- Magnetic Tapes or Cartridges.


Optical Storage:


 The need for greater storage capacities has led hardware manufacturers to search continually for alternative storage media and, where no alternatives exist, to improve the available technologies and develop new ones. Optical storage techniques make precise positioning possible by means of laser beams.

 Reading information from an optical medium is a relatively easy task; writing it is another matter. The problem is the difficulty of modifying the surface of an optical medium, since writing physically pits the surface so that it reflects or scatters the laser light.

 The main optical storage devices are:

 1.- CD-ROM.- Compact Disc Read-Only Memory

2.- WORM.- Write Once, Read Many

 Magneto-Optical Media:


 These media combine some of the best features of optical and magnetic recording technologies. An MO disc has the capacity of an optical disc but can be rewritten with the ease of a magnetic disc. They are currently available in various sizes and capacities.

Output:

 The output devices of a computer are the hardware responsible for sending a response to the outside world, such as monitors, printers, sound systems, modems, etc.

1.- Monitors:


 The monitor, or video screen, is the most common output device. Some are built into the body of the computer and others are separate from it. There are many ways to classify monitors; the most basic is by color capability: Monochrome monitors display only 2 colors, one for the background and one for the foreground; the colors can be black and white, green and black, or amber and black. Grayscale: a grayscale monitor is a special type of monochrome monitor capable of displaying different shades of gray. Color: color monitors can display from 4 up to 1 million different colors.

 As technology has advanced, different models have emerged: TTL, monochrome, with very poor resolution; the first of these could not display graphics. CGA (Color Graphics Adapter) displayed 4 colors, with very poor resolution compared with current monitors; it is now off the market. EGA (Enhanced Graphics Adapter) handled a better resolution than CGA, 640x350 pixels (pixels are the points of light from which characters and graphics are formed on the monitor: the more pixels, the better the resolution), and displayed 64 colors. VGA (Video Graphics Array) comes in monochrome and color versions. Suitable for graphical environments because of its high resolution (640x480 pixels), it can reach up to 256 simultaneous colors (from a larger palette), or 64 shades of gray, depending on the memory allocated to the device. SVGA (Super Video Graphics Array) handles a higher resolution (1,024x768); the number of displayable colors varies with memory but can exceed 1 million.

 UVGA (Ultra Video Graphics Array): 1,280x1,024 resolution. The quality of the images a monitor can display is defined more by the capabilities of the video controller card than by those of the monitor itself. The video controller is an intermediary device between the CPU and the monitor; it contains the memory and other electronic circuitry needed to send the information to the monitor for display on the screen.
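The controller's memory requirement follows directly from resolution and color depth: the frame buffer holds one value per pixel. A rough worked example in Python (the bits-per-pixel figures are illustrative pairings, not fixed properties of each standard):

    # Frame buffer size = width x height x bits-per-pixel / 8 bytes.
    modes = [
        ("VGA",  640,  480, 8),    # 8 bits/pixel  -> 256 colors
        ("SVGA", 1024, 768, 16),   # 16 bits/pixel -> 65,536 colors
        ("UVGA", 1280, 1024, 24),  # 24 bits/pixel -> ~16.7 million colors
    ]

    for name, width, height, bits in modes:
        kilobytes = width * height * bits / 8 / 1024
        print(f"{name}: {width}x{height} at {bits} bpp needs {kilobytes:,.0f} KB")

    # VGA: 640x480 at 8 bpp needs 300 KB
    # SVGA: 1024x768 at 16 bpp needs 1,536 KB
    # UVGA: 1280x1024 at 24 bpp needs 3,840 KB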

2.- Printers:

A printer is the device that converts computer output into printed images. Printers can be divided into 2 types: impact printers and non-impact printers.


 IMPACT PRINTERS:


 An impact printer uses a printing mechanism that strikes the image of the character against a ribbon and onto the paper. Line, dot-matrix, and daisy-wheel printers are examples of impact printers. The dot-matrix printer is the most common. It has a movable print head with several needles that strike the ink ribbon and form characters as patterns of dots on the paper; the more needles the print head has, the better the quality of the result. They come in 10" and 15" widths, and speeds vary from 280 to 1,066 cps (characters per second). Daisy-wheel printers give the same quality as a typewriter, using a print disc that contains all the characters; they are off the market because they are slow. Line printers are high-speed printers that print one line at a time; they generally connect to mainframes and minicomputers, printing at roughly 100 to 5,000 LPM (lines per minute).

 NON-IMPACT PRINTERS:


 These print by various methods that do not involve impact. They are quieter, and their print quality is noticeably better than that of impact printers. The methods they use are the following: Thermal printers print much like dot-matrix machines, but the characters are formed by burning dots onto special paper; speed around 80 cps. Faxes work with this method.

 Inkjet printers spray small jets of ink from disposable cartridges onto the paper; color models exist. Speed: 4 to 7 ppm (pages per minute). Electrophotographic or laser printers create letters and graphics through a photocopying process: a laser beam traces the characters onto a photosensitive drum, and the toner is then fused to the paper using heat. Very high resolution quality; speeds from 4 to 18 ppm.

 Software:


Definition
Classification:
Operating Systems
Programming Languages
General-Use Software
Application Software

Software Definition:


 Software is the set of instructions that computers use to manipulate data. Without software, the computer would be an inert collection of hardware. Loading programs onto a computer is like giving the machine an instant education: suddenly it "knows" how to think and how to operate. Software is the set of programs, documents, procedures, and routines associated with the operation of a computer system, as distinguished from the physical components called hardware.

 Computer programs are commonly called software. Software ensures that a program or system fully meets its objectives, operates efficiently, is properly documented, and is simple enough to operate. It is simply the set of individual instructions given to the microprocessor so that it can process data and generate the expected results. Hardware alone can do nothing; software, the set of instructions that makes the hardware work, must exist as well.

 Software Classification:


 Software is classified into 4 categories: operating systems, programming languages, general-use software, and application software (some authors consider the 3rd and 4th categories as one).

 Operating systems:


 The operating system is the manager and organizer of all the activities the computer carries out. It sets the rules by which information is exchanged between central and external memory, and it determines the elementary operations the processor can perform. The operating system must be loaded into central memory before any other information.

Programming Languages:

Programs tell the computer what task to perform and how to perform it, but the commands must be entered in a language the system can understand. In principle, the computer understands only instructions in machine code, that is, the code specific to that computer. From machine code, however, the so-called high- and low-level languages are built.

General Use Software:


 General-purpose software provides the framework for a large number of business, scientific, and personal applications. Spreadsheet, Computer-Aided Design (CAD), Word Processing, Database Management software belongs to this category. Most general-purpose software is sold as a package; that is, with user-oriented software and documentation (reference manuals, keyboard templates, and more).

 Application software:


 Application software is designed and written to perform specific personal, business, or scientific tasks, such as payroll processing, human-resource management, or inventory control. All these applications process data (a receipt of materials) and generate information (payroll records) for the user.

Operating Systems:

An operating system (OS) is itself a computer program. However, it is a very special program, perhaps the most complex and important one on a computer. The OS wakes the computer up and makes it recognize the CPU, the memory, the keyboard, the video system, and the disk drives. In addition, it gives users a way to communicate with the computer and serves as the platform from which application programs are run.

 When you turn on a computer, the first thing it does is a self-test called the Power-On Self Test (POST). During the POST, the computer identifies its memory, its disks, its keyboard, its video system, and any other devices connected to it. The next thing it does is look for an OS to boot.

Once the computer has booted its OS, it keeps at least part of it in memory at all times. While the computer is on, the OS has 4 main tasks:

1. Provide either a command-line interface or a graphical user interface through which the user can communicate with the computer. Command-line interface: you type words and symbols on the computer keyboard; MS-DOS, for example. Graphical User Interface (GUI): you select actions by using a mouse to click on figures called icons or by choosing options from menus.

2. Manage the hardware devices on the computer.

 When programs run, they need to use the memory, the monitor, the disk drives, and the I/O ports (printers, modems, etc.). The OS acts as an intermediary between the programs and the hardware.

3. Manage and maintain disk file systems. OSs group information into logical compartments for storage on disk. These logical groups are called files. Files may contain program instructions or user-created information. The OS maintains a list of the files on a disk and provides the tools needed to organize and manipulate them.

4. Support other programs.

 Another important function of the OS is to provide services to other programs, services similar to those the OS provides directly to users: for example, listing files, saving them to disk, deleting files, checking available space, and so on. When programmers write computer programs, they include instructions that request these OS services. These instructions are known as "system calls."
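On a modern system, those same services can be requested from a high-level language; the standard library wraps the underlying system calls. A small Python sketch (the file name is arbitrary):

    import os
    import shutil

    print(os.listdir("."))                       # list the files in a directory

    with open("note.txt", "w") as f:             # save a file to disk
        f.write("hello")

    total, used, free = shutil.disk_usage(".")   # review available space
    print(f"{free // 2**20} MB free")

    os.remove("note.txt")                        # delete a file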

The Kernel and the Shell:

The core functions of an OS are controlled by the kernel, while the user interface is controlled by the shell. For example, the most important part of DOS is a program named COMMAND.COM, which has two parts. The kernel, kept in memory at all times, contains the low-level machine code that handles hardware management for the other programs that need those services. The second part of COMMAND.COM is the shell, the command interpreter.

 Low-level OS functions and command-interpreting functions are separate, so you can keep the DOS kernel running but use a different user interface. This is exactly what happens when you load Microsoft Windows, which takes the place of the shell, replacing the command-line interface with a graphical user interface. There are many different shells on the market, for example NDOS (Norton DOS), XTG, and PC TOOLS; even MS-DOS itself, from version 5.0 on, included a shell called DOS SHELL.

 Categories of Operating Systems:

MULTITASKING:

 The term multitasking refers to the ability of the OS to run more than one program at the same time. There are two schemes operating systems use to implement multitasking.

 The first requires cooperation between the OS and the application programs. The programs are written so that they periodically check with the OS to see whether any other program needs the CPU; if so, they hand control of the CPU to the next program. This method is called cooperative multitasking, and it is the method used by the Macintosh OS and by DOS computers running Microsoft Windows.

 The second method is preemptive multitasking (multitasking with prioritization). Under this scheme, the OS maintains a list of the processes (programs) that are running. When each process on the list starts, the OS assigns it a priority; at any time the OS can step in and change the priority of a process, effectively reordering the priority list. The OS also keeps track of how much time it spends on any one process before going on to the next. With preemptive multitasking, the OS can suspend the running process at any moment and reassign the time to a higher-priority task. Unix, OS/2, and Windows NT employ this type of multitasking.
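The cooperative scheme is simple enough to mimic with Python generators: each "program" does a little work and then voluntarily yields control. A toy sketch (preemptive multitasking cannot be imitated this simply, since it needs the OS to interrupt a program that never yields):

    def program(name, steps):
        for i in range(1, steps + 1):
            print(f"{name}: step {i}")
            yield                      # the voluntary "check in" with the scheduler

    ready = [program("payroll", 3), program("orders", 2)]
    while ready:                       # a simple round-robin scheduler
        task = ready.pop(0)
        try:
            next(task)                 # give the CPU to the task...
            ready.append(task)         # ...and requeue it when it yields
        except StopIteration:
            pass                       # the task finished; drop it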

MULTI-USER:

A multi-user OS allows more than a single user to access a computer. Of course, to do this the OS must also be capable of multitasking. Unix is the most widely used multi-user operating system. Because Unix was originally designed to run on a minicomputer, it was multi-user and multitasking from its inception.

 PC versions of Unix are currently produced by companies such as The Santa Cruz Operation, Microport, Essex, IBM, and SunSoft. Apple also produces a version of Unix for the Macintosh called A/UX. Unix provides three ways to allow multiple people to use the same PC at the same time:

 1. Through modems.

2. Through terminals connected via serial ports.

3. Through networks.

 MULTIPROCESSING:


 Computers that have more than one CPU are called multiprocessor computers. A multiprocessing operating system coordinates the operations of such computers. Since each CPU in a multiprocessor computer can be executing one instruction, the other processors are free to process other instructions simultaneously. Using a computer with multiprocessing capability increases its speed of response and processing.

 Almost all computers with multiprocessing capability offer a great advantage. The first multiprocessing operating systems performed what is known as asymmetric multiprocessing: the main CPU retains global control of the computer and of the other processors. This was the first step toward multiprocessing, but it was not the ideal direction, since the main CPU could become a bottleneck. In symmetric multiprocessing, there is no single controlling CPU. The barrier to overcome in implementing symmetric multiprocessing is that OSs have to be redesigned, or designed from scratch, to work in a multiprocessing environment.

 Unix extensions that support asymmetric multiprocessing are available now, and symmetric extensions are becoming available. Microsoft's Windows NT supports symmetric multiprocessing.
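In the same spirit, here is a minimal Python sketch of work spread across several CPUs using the standard multiprocessing module; no one worker is "in charge" of the arithmetic, and the squaring workload is arbitrary:

    from multiprocessing import Pool

    def square(n):
        return n * n

    if __name__ == "__main__":
        with Pool() as pool:                      # one worker per CPU by default
            results = pool.map(square, range(10))
        print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]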

Most common operating systems:

 MS-DOS

 It is the most common and popular of all PC operating systems. Its continued popularity is due to the overwhelming volume of available software and the large installed base of Intel-based computers. When Intel released the 80286, DOS became so popular and strong in the market that DOS and DOS applications accounted for the majority of the PC software market.

 At that time, IBM compatibility was a necessity for products to succeed, and "IBM compatibility" meant computers that ran DOS as well as IBM computers did. After the introduction of the Intel 80286 processor, IBM and Microsoft recognized the need to take advantage of the multitasking capabilities of this CPU. They teamed up to develop OS/2, a modern multitasking OS for Intel microprocessors. However, the partnership did not last long.

 Differences of technical opinion, and IBM's perception of Windows as a threat to OS/2, caused a disagreement between the companies that ultimately led to the dissolution of the partnership. IBM continued the development and promotion of OS/2, a single-user multitasking operating system that requires an Intel 286 microprocessor or better. Besides multitasking, the great advantage of the OS/2 platform is that it can address up to 16 MB of RAM directly (compared with 1 MB in the case of MS-DOS).


 On the other hand, OS/2 is a very complex environment that requires up to 4 MB of RAM. OS/2 users interact with the system through a graphical user interface called Presentation Manager. Even though OS/2 breaks the 1 MB barrier of MS-DOS, it took time for it to become popular, and software vendors were reluctant to commit resources to creating software for it.
VIRUSES:

Boot sector viruses
File-infecting viruses
Trojan horses
Time bombs
Mutants
 Viruses are programs designed to multiply and spread without revealing their existence. Electronic viruses can produce a variety of symptoms in the computers they infect. Some viruses multiply without causing obvious changes; malicious viruses may emit strange noises or display offensive messages on the screen. In extreme cases, they can erase files or entire hard drives.

 Viruses spread in several ways. Some duplicate themselves when an infected file is opened. Others infect the part of a hard disk that controls the startup of the computer and then infect other disks as they are accessed. A virus that has infected one disk can spread to any other disk that carries programs or information.

 CLASSIFICATION OF VIRUSES:


 1. Boot sector viruses: the boot sector is the part of the hard disk that controls the loading of the operating system when we turn on the computer.

 2. File-infecting viruses: once this type of virus is activated, it spreads to all of a program's files.

 3. Trojan horses: this virus, masquerading as a legitimate program, can harm your computer, files, or hard drive. Trojan horses are the most capable of destroying files.

 4. Time bombs: they remain hidden until the computer meets certain conditions, such as a specific date and time.

 5. Mutants: these viruses change form as they pass from one disk to another or from one file to another, making them difficult to detect and eradicate.

WINDOWS:


 Windows is a graphical working environment that runs many applications designed specifically for it. Its main characteristic is the power of its applications, which let users work in a simple and pleasant way. In the Windows environment the screen is treated as if it were a desktop, and functions are presented in areas called windows.

Windows offers a taskbar that keeps the files we have open within reach without letting them get in our way at any given moment; Windows itself consists of many windows.

 Word:


 Microsoft Word is a program designed for the user's convenience, with many features; Word handles text documents. Microsoft Word's requirements are as follows:

 1. Windows 3.1 or Windows 95

2. 4 megabytes of RAM (I recommend 8 megabytes)

3. A 486 processor at 40 MHz (Pentium recommended)

4. Keyboard

5. Mouse (recommended)

6. Monitor

 Word has a copy-and-paste feature for text that repeats, and bullets are another Microsoft Word feature that helps us lay out properly the data that calls for them.
