CPIT-201 | Introduction to computing

1. CH-1 | Introduction

1.1. Turing model

1.1.1. described by Alan Turing in 1937

1.1.2. He proposed that all computation could be performed by a special kind of machine called a Turing machine

1.2. Data processors

1.2.1. a computer acts as a black box that

1.2.1.1. accepts input data

1.2.1.2. processes the data

1.2.1.3. produces output data

1.3. Programmable data processors

1.3.1. the program

1.3.1.1. A program is a set of instructions that tells the computer what to do with data

1.3.1.2. Instructions are written in a computer language.

1.3.1.3. Output data depends on the input data and the program

1.4. VON NEUMANN Model

1.4.1. Intro

1.4.1.1. since program and data are logically the same, programs should also be stored in the memory of a computer.

1.4.2. The von Neumann model divides the computer hardware into four subsystems:

1.4.2.1. memory

1.4.2.2. arithmetic logic unit

1.4.2.3. control unit

1.4.2.4. input/output

1.4.3. The Stored Program Concept

1.4.3.1. early computers did not store the program in the computer’s memory,

1.4.3.2. von Neumann model states that the program must be stored in memory.

1.4.4. Figure

1.4.4.1. Memory is the storage area where data and programs are stored.

1.4.4.2. The ALU is where calculations and logical operations take place.

1.4.4.3. The control unit (CU) controls the operations of the memory, ALU, and I/O subsystems.

1.4.4.4. The input subsystem accepts data and the program from the outside.

1.4.4.5. The output subsystem sends the results of processing to the outside.

1.4.5. Program and data in memory

1.4.5.1. Both the data and the programs have the same format because they are stored in memory as binary patterns

1.4.6. Sequential execution of instructions

1.4.6.1. The von Neumann model requires that instructions be executed one after another, although today's computers execute programs in the order that is the most efficient (a minimal sketch of the model follows below).
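A minimal Python sketch of the stored-program idea, using an invented four-instruction set (LOAD, ADD, STORE, HALT are illustrative names, not a real instruction set): the program and the data sit in the same memory, the control-unit loop fetches instructions sequentially, and the addition stands in for the ALU.

```python
# Minimal stored-program machine sketch (illustrative instruction set, not a real ISA).
# Memory holds both the program (as tuples) and the data (as numbers).
memory = {
    # program
    0: ("LOAD", 100),    # load the word at address 100 into the accumulator
    1: ("ADD", 101),     # add the word at address 101 (an ALU operation)
    2: ("STORE", 102),   # store the accumulator at address 102
    3: ("HALT", None),
    # data
    100: 7,
    101: 5,
    102: 0,
}

def run(memory):
    pc, acc = 0, 0                     # control-unit state: program counter, accumulator
    while True:
        opcode, addr = memory[pc]      # fetch the next instruction (sequential execution)
        pc += 1
        if opcode == "LOAD":
            acc = memory[addr]
        elif opcode == "ADD":
            acc = acc + memory[addr]   # arithmetic done by the "ALU"
        elif opcode == "STORE":
            memory[addr] = acc
        elif opcode == "HALT":
            return memory

run(memory)
print(memory[102])   # 12 -- the result of processing appears in memory (output)
```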

1.5. Computer components

1.5.1. computer hardware

1.5.1.1. We will discuss computer hardware in more detail in Chapter 5.

1.5.2. data

1.5.2.1. von Neumann model clearly defines a computer as a data processing machine that accepts the input data, processes it, and outputs the result.

1.5.3. computer software.

1.6. history of computing

1.6.1. Mechanical machines (before 1930)

1.6.1.1. Pascaline

1.6.1.2. Leibnitz’ Wheel

1.6.1.3. Jacquard loom

1.6.2. electronic computers (1930–1950)

1.6.2.1. computers of this period did not store the program in memory—all were programmed externally.

1.6.2.2. Five computers were prominent during these years

1.6.2.2.1. ABC

1.6.2.2.2. Z1

1.6.2.2.3. Mark I

1.6.2.2.4. Colossus

1.6.2.2.5. ENIAC

1.6.3. Computer generations (1950–present)

1.6.3.1. First generation 1950–1959

1.6.3.2. Second generation 1959–1965

1.6.3.3. Third generation 1965–1975

1.6.3.4. Fourth generation 1975–1985

1.6.3.5. Fifth generation 1985–present

1.7. Social and ethical issues

1.7.1. Social issues

1.7.1.1. Dependency

1.7.1.2. Social justice

1.7.1.3. Digital divide

1.7.2. Ethical issues

1.7.2.1. Privacy

1.7.2.2. Copyright

1.7.2.3. Computer crime

1.8. Computer science as a discipline

1.8.1. Computer science is now divided into two broad categories:

1.8.1.1. systems areas

1.8.1.2. applications areas

2. CH-2 | Data Representation

2.1. Number system

2.1.1. Positional system

2.1.1.1. The decimal system (base 10)

2.1.1.1.1. S = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}

2.1.1.2. The binary system (base 2)

2.1.1.2.1. S = {0, 1}

2.1.1.3. The hexadecimal system (base 16)

2.1.1.3.1. S = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F}

2.1.1.4. The octal system (base 8)

2.1.1.4.1. S = {0, 1, 2, 3, 4, 5, 6, 7}

2.1.1.5. Summary
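A short Python sketch of the positional idea: each digit is weighted by a power of the base. The helper name positional_value is illustrative; Python's built-in int(string, base) performs the same conversion.

```python
def positional_value(digits, base):
    """Value of a positional number given its digits, most significant first."""
    value = 0
    for d in digits:
        value = value * base + d        # each step multiplies the existing weights by the base
    return value

# 1101 in binary, 237 in octal, 2AE in hexadecimal (A = 10, E = 14)
print(positional_value([1, 1, 0, 1], 2))       # 13
print(positional_value([2, 3, 7], 8))          # 159
print(positional_value([2, 10, 14], 16))       # 686

# Python's built-in conversions agree:
print(int("1101", 2), int("237", 8), int("2AE", 16))   # 13 159 686
```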

2.1.2. Non-positional system

2.1.2.1. non-positional number systems are not used in computers

2.1.2.1.1. They use a limited number of symbols, each of which has a value

2.1.2.1.2. The position a symbol occupies in the number normally bears no relation to its value; the value of each symbol is fixed

2.2. Bit pattern

2.2.1. All data types are represented using a universal format called a bit pattern

2.2.2. A Bit (binary digit) is the smallest unit of data that can be stored in a computer; it is either 0 or 1

2.2.3. An electronic switch can represent a bit.

2.2.4. Data are coded when they enter the computer and decoded when they are presented to the user.

2.3. Text

2.3.1. Each symbol can be represented by a bit pattern

2.3.2. Text such as the word "BYTE" can be represented as a sequence of four bit patterns, one pattern per symbol (as sketched below)
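A minimal sketch of this, assuming 8-bit ASCII codes for the symbols (any code that assigns one bit pattern per symbol would illustrate the same idea).

```python
word = "BYTE"
patterns = [format(ord(ch), "08b") for ch in word]   # one 8-bit ASCII pattern per symbol
print(patterns)   # ['01000010', '01011001', '01010100', '01000101']
```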

2.4. Number

2.4.1. hexadecimal

2.4.1.1. (base 16)

2.4.1.2. 4-bit pattern

2.4.2. octal

2.4.2.1. (base 8)

2.4.2.2. 3-bit pattern
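A small sketch of why the groupings work: 16 = 2^4 and 8 = 2^3, so every hexadecimal digit stands for a 4-bit pattern and every octal digit for a 3-bit pattern. The 12-bit pattern below is an arbitrary example.

```python
bits = "100111010110"                 # an arbitrary 12-bit pattern

# Group into 4-bit chunks -> one hexadecimal digit per chunk
hex_digits = [format(int(bits[i:i+4], 2), "X") for i in range(0, len(bits), 4)]
print("".join(hex_digits))            # 9D6

# Group into 3-bit chunks -> one octal digit per chunk
oct_digits = [str(int(bits[i:i+3], 2)) for i in range(0, len(bits), 3)]
print("".join(oct_digits))            # 4726
```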

2.5. Image

2.5.1. Bitmap

2.5.1.1. The bitmap graphic method divides an image into pixels

2.5.2. Vector

2.5.2.1. In the vector graphic method, an image is represented using mathematical formulas

2.6. Audio

2.6.1. Sampling

2.6.1.1. Measure value of signal at equal intervals

2.6.2. Quantization

2.6.2.1. Assign a value from a set to a sample

2.6.3. Coding

2.6.3.1. Store the quantized values as binary patterns
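A minimal sketch of the three steps applied to a synthetic sine signal; the sampling rate (8 samples) and bit depth (3 bits) are arbitrary choices for illustration.

```python
import math

rate, bits = 8, 3                      # 8 samples over one period, 3-bit quantization
levels = 2 ** bits

# 1. Sampling: measure the value of the signal at equal intervals
samples = [math.sin(2 * math.pi * t / rate) for t in range(rate)]

# 2. Quantization: map each sample (between -1 and 1) to one of 2^bits integer levels
quantized = [min(int((s + 1) / 2 * levels), levels - 1) for s in samples]

# 3. Coding: store each quantized level as a binary pattern
coded = [format(q, f"0{bits}b") for q in quantized]
print(coded)   # ['100', '110', '111', '110', '100', '001', '000', '001']
```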

2.7. Video

2.7.1. Representing video is a combination of representing images (a sequence of frames) and audio

3. CH-3 | Number Representation

3.1. Range of integers

3.1.1. Integers are whole numbers

3.1.1.1. (i.e. numbers without a fraction part).

3.1.1.2. An integer can be positive or negative

3.1.1.3. No computer can store all the integers in this range because a computer does not have an infinite storage capability

3.1.1.4. the range of integers depends on the number of bits that a computer allocates to store an integer

3.2. Integer Representation

3.2.1. Unsigned representation

3.2.1.1. An unsigned integer is an integer that can never be negative

3.2.1.2. This means it represents only zero and positive integers.

3.2.1.3. No bit is allocated for the sign.

3.2.1.4. Range: 0 to (2^N - 1), where N is the number of bits allocated to represent an integer.

3.2.1.5. What is overflow ?

3.2.1.5.1. Overflow is an error that occurs if one tries to store a number that is outside the range defined by the allocation (see the sketch after this list).

3.2.1.6. Applications of unsigned integers:

3.2.1.6.1. Counting

3.2.1.6.2. Addressing

3.2.1.6.3. storing other data types
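A short sketch of the unsigned range and of overflow for N = 8 bits; store_unsigned is an illustrative helper that keeps only the N least significant bits, the way a fixed-size memory location would.

```python
N = 8
max_unsigned = 2 ** N - 1              # range is 0 .. 2^N - 1, here 0 .. 255

def store_unsigned(value, n_bits=N):
    """Keep only the n_bits least significant bits, as a fixed-size memory cell would."""
    return value % (2 ** n_bits)

print(max_unsigned)                    # 255
print(store_unsigned(255))             # 255 -- fits in the range
print(store_unsigned(256))             # 0   -- overflow: 256 is outside the range
```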

3.2.2. Signed integers

3.2.2.1. Sign and Magnitude

3.2.2.1.1. There are two 0s in sign-and-magnitude representation: positive and negative.

3.2.2.1.2. the leftmost bit defines the sign of the integer

3.2.2.1.3. Applications

3.2.2.2. One’s Complement

3.2.2.2.1. There are two 0s in one’s complement representation: positive and negative.

3.2.2.2.2. In one’s complement representation, the leftmost bit defines the sign of the number

3.2.2.2.3. Applications:

3.2.2.3. Two’s Complement

3.2.2.3.1. There is only one 0 in two's complement representation.

3.2.2.3.2. Two's complement is the most common, the most important, and the most widely used representation of integers today, because it has only one representation for 0 and thus does not create confusion in calculations (a comparison of the three signed representations is sketched after this list).

3.2.2.3.3. In two’s complement representation, the leftmost bit defines the sign of the number

3.2.2.3.4. Applications:
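A sketch comparing the three signed representations of the same value in 8 bits (the helper names are illustrative, not a standard API).

```python
N = 8

def sign_and_magnitude(x, n=N):
    sign = "1" if x < 0 else "0"                 # leftmost bit is the sign
    return sign + format(abs(x), f"0{n-1}b")

def ones_complement(x, n=N):
    pattern = format(abs(x), f"0{n}b")
    if x < 0:                                    # negative: flip every bit
        pattern = "".join("1" if b == "0" else "0" for b in pattern)
    return pattern

def twos_complement(x, n=N):
    # x mod 2^n gives the two's complement pattern (flip the bits and add 1 for negatives)
    return format(x % (2 ** n), f"0{n}b")

for x in (13, -13, 0):
    print(x, sign_and_magnitude(x), ones_complement(x), twos_complement(x))
# +13 -> 00001101 in all three systems
# -13 -> 10001101 (S&M), 11110010 (one's), 11110011 (two's)
#   0 -> 00000000 here; S&M and one's complement also have a second, negative zero
#        (10000000 and 11111111), while two's complement has only this one pattern
```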

3.3. Summary of integer representation

3.3.1. Sign and Magnitude and One’s complement

3.3.1.1. are not used by computers today to store signed numbers, because the two representations of 0 cause confusion.

3.3.2. The Sign and Magnitude method is used mainly for

3.3.2.1. applications that do not need operations on the numbers

3.3.2.1.1. (e.g. addition, subtraction, ...)

3.3.2.1.2. For example, changing analog signals to digital signals.

3.4. Excess system

3.4.1. This representation can be used to store positive and negative numbers but operations on them are very difficult

3.4.2. It is used mainly to store the exponential value of a fraction

3.4.3. It depends on a magic number, 2^(N-1) or 2^(N-1) - 1, where N is the number of bits allocated

3.4.4. The way of using Excess_127 is:

3.4.4.1. Decimal to binary

3.4.4.1.1. add 127 to the number then convert to binary

3.4.4.2. Binary to decimal

3.4.4.2.1. convert binary to decimal then subtract 127
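A minimal sketch of Excess_127 for an 8-bit field, following the two rules above; the function names are illustrative.

```python
BIAS = 127                                  # magic number for N = 8: 2**(8-1) - 1

def to_excess127(value):
    return format(value + BIAS, "08b")      # decimal -> binary: add 127, then convert

def from_excess127(pattern):
    return int(pattern, 2) - BIAS           # binary -> decimal: convert, then subtract 127

print(to_excess127(25))             # 10011000  (25 + 127 = 152)
print(from_excess127("10011000"))   # 25
print(to_excess127(-7))             # 01111000  (-7 + 127 = 120)
```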

3.5. Floating point representation

3.5.1. A floating point number contains an integer part and a fraction part.

3.5.2. Integer part is converted to binary as usual but fraction part needs special treatment.

3.5.3. Normalization

3.5.3.1. Normalization means moving the point so that only a single 1 remains to the left of it.

3.5.3.2. To indicate the original value of the number, multiply the number by 2^e

3.5.3.2.1. where e is the number of positions that the point was moved.
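A worked sketch of normalization for the binary number 101.11 (decimal 5.75): moving the point two places to the left gives 1.0111 x 2^2, and the pieces can then be stored as a sign bit, the exponent in Excess_127, and the mantissa bits. The 8-bit exponent and 23-bit mantissa widths assume a single-precision-style layout.

```python
# Normalize +(101.11)_2, i.e. decimal 5.75: move the point 2 places to the left
# so that a single 1 remains to its left:  1.0111 x 2^2
sign = 0
exponent = 2
mantissa_bits = "0111"                            # bits to the right of the point after normalizing

stored_exponent = format(exponent + 127, "08b")   # exponent stored in Excess_127
stored_mantissa = mantissa_bits.ljust(23, "0")    # mantissa padded to 23 bits

print(sign, stored_exponent, stored_mantissa)
# 0 10000001 01110000000000000000000
```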

4. Operations on Bits

4.1. Arithmetic

4.1.1. Adding numbers in two's complement is like adding numbers in decimal: one adds column by column, and if there is a carry, it is added to the next column.
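A sketch of the column-by-column addition for two 8-bit two's complement patterns; any carry out of the leftmost column is discarded. The function name is illustrative.

```python
def add_twos_complement(a, b, n=8):
    """Add two n-bit patterns column by column, propagating the carry."""
    result, carry = [], 0
    for bit_a, bit_b in zip(reversed(a), reversed(b)):   # rightmost column first
        s = int(bit_a) + int(bit_b) + carry
        result.append(str(s % 2))
        carry = s // 2                                   # carry goes to the next column
    return "".join(reversed(result))                     # carry out of the leftmost column is dropped

# (+17) + (-5) = +12
print(add_twos_complement("00010001", "11111011"))   # 00001100
```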

4.2. Logical

4.2.1. Bit values 0 and 1 can be interpreted as logical values such as

4.2.1.1. true for 1

4.2.1.2. false for 0.

4.2.2. Unary and binary logical operations can be applied to logical values represented by bits.

4.2.3. Logical operator

4.2.3.1. Truth table (reproduced in the sketch after this list of operators)

4.2.3.2. Unary

4.2.3.2.1. NOT

4.2.3.3. Binary

4.2.3.3.1. AND

4.2.3.3.2. OR

4.2.3.3.3. XOR
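A small sketch that reproduces the truth tables by applying Python's bitwise operators to single bits (1 standing for true, 0 for false).

```python
# Truth tables for NOT, AND, OR, XOR on single bits (1 = true, 0 = false)
print("x y | NOT x  AND  OR  XOR")
for x in (0, 1):
    for y in (0, 1):
        print(f"{x} {y} |   {1 - x}     {x & y}   {x | y}   {x ^ y}")
```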

4.2.4. Mask

4.2.4.1. The AND, OR, and XOR operators can be used to modify (set, unset, or flip) bits in a bit pattern (a combined sketch follows the examples below).

4.2.4.2. The mask is used to modify another bit pattern.

4.2.4.3. Example of (unsetting) specific bits

4.2.4.3.1. Use the AND operator to unset (force to 0) bits.

4.2.4.3.2. To unset a bit in the target, use 0 in the corresponding bit in the mask.

4.2.4.3.3. Otherwise, to leave a bit unchanged, use 1 in the corresponding bit in the mask.

4.2.4.4. Example of (setting) specific bits

4.2.4.4.1. Use the OR operator to set (force to 1) bits.

4.2.4.4.2. To set a bit in the target, use 1 in the corresponding bit in the mask.

4.2.4.4.3. Otherwise, to leave a bit unchanged, use 0 in the corresponding bit in the mask.

4.2.4.5. Example of flipping specific bits

4.2.4.5.1. Use the XOR operator to flip bits (0 becomes 1 and 1 becomes 0).

4.2.4.5.2. To flip a bit in the target, use 1 in the corresponding bit in the mask

4.2.4.5.3. Otherwise, to leave a bit unchanged, use 0 in the corresponding bit in the mask.
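A combined sketch of the three mask examples on an arbitrary 8-bit target: AND with a mask of 0s unsets bits, OR with a mask of 1s sets bits, and XOR with a mask of 1s flips bits. The target and mask values are arbitrary examples.

```python
target = 0b10110110

unset_mask = 0b11110000      # 0s where bits should be forced to 0, 1s to leave bits unchanged
set_mask   = 0b00001111      # 1s where bits should be forced to 1, 0s to leave bits unchanged
flip_mask  = 0b00001111      # 1s where bits should be flipped, 0s to leave bits unchanged

print(format(target & unset_mask, "08b"))   # 10110000 -- rightmost 4 bits unset
print(format(target | set_mask, "08b"))     # 10111111 -- rightmost 4 bits set
print(format(target ^ flip_mask, "08b"))    # 10111001 -- rightmost 4 bits flipped
```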