What is an “Input Queue”?

The input queue is the collection of processes on disk that are waiting to be brought into memory for execution. When the operating system wants to load a process into main memory, it picks one from the input queue.

The multistep processing of a user program

We use addresses to locate these processes, but how? Is the address generated by the CPU the same as the address in memory?

Absolutely not. There are two kinds of address:

  1. Logical address (virtual address): the address at which an item (memory cell, storage element, network host) appears to reside from the perspective of an executing application program.
  2. Physical address: a memory address represented as a binary number on the address-bus circuitry, which enables the data bus to access a particular storage cell of main memory or a register of a memory-mapped I/O device.

Overlay(“オーバーレイ”) and Paging(“ページング”) in virtual memory(仮想記憶)

Target: Enable a process to be larger than the amount of memory allocated to it

  • Keep in memory only those instructions and data that are needed at any given time.
  • Overlays are implemented by the user; no special support is needed from the operating system, but designing the overlay structure is complex.

What is a Symbol Table (シンボルテーブル)?

  • A symbol table is a data structure used by a language translator such as a compiler or interpreter; each identifier (a.k.a. symbol) in a program’s source code is associated with information relating to its declaration or appearance in the source.

What is “Contiguous Allocation”(“連続メモリ割り当て”)?

  • Each process is contained in a single contiguous section of memory; this mechanism is called “Contiguous Allocation” (“連続メモリ割り当て”).
  • There are two partitions in main memory:
    1. The resident operating system, usually held in low memory together with the interrupt vector (“割り込みベクタ”).
    2. User processes, held in high memory.

Memory is usually divided into two areas, one for the operating system and the other for user processes. The operating system can be located in low memory or high memory. The main factor affecting this decision is the location of the interrupt vector. Because the interrupt vector is usually located in low memory, programmers usually place the operating system in low memory as well.

However, because these allocations are contiguous, how can we protect one process from another?

Memory Protection:

  • Relocation Register: contains the value of the smallest physical address.
  • Limit Register: contains the range of logical addresses.
  • Under the joint action of these two registers, the MMU maps each logical address dynamically by adding to it the value in the relocation register; every logical address must first be checked to be less than the value in the limit register (the limit register thus bounds the memory range of each process).


The relocation register holds the smallest physical address value, and the limit register holds the range of logical addresses (for example: relocation = 100050, limit = 74500). With these two registers in place, every logical address must be less than the value in the limit register. The MMU dynamically maps a logical address by adding the relocation register’s value to it, and the resulting physical address is then sent to the memory unit.
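A minimal sketch of this check-and-map step, using the example values above (the function name `translate` is illustrative):

```python
# Sketch of the MMU behavior described above, with the example values
# relocation = 100050 and limit = 74500.

RELOCATION = 100050  # smallest physical address of the process's partition
LIMIT = 74500        # size of the process's logical address space

def translate(logical_address: int) -> int:
    """Map a logical address to a physical one, trapping on out-of-range access."""
    if not 0 <= logical_address < LIMIT:
        raise MemoryError(f"addressing error: {logical_address} exceeds limit {LIMIT}")
    return logical_address + RELOCATION

print(translate(0))      # -> 100050
print(translate(74499))  # -> 174549
```

Accessing address 74500 or beyond would raise the trap instead of returning a physical address.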

Multiple-partition Allocation:

  • Hole: a block of available memory; holes of various sizes are scattered throughout memory.
    When a process arrives, it is allocated memory from a hole large enough to accommodate it. The OS maintains information about:

    1. allocated partitions
    2. free partitions (hole)
  • How to satisfy a request of size n from a list of free holes?

    1. First-fit: Allocate the first hole that is big enough.
    2. Best-fit: Allocate the smallest hole that is big enough; must search entire list, unless ordered by size. Produces the smallest leftover hole.
    3. Worst-fit: Allocate the largest hole; must also search entire list. Produces the largest leftover hole.
  • Fragmentation (“フラグメンテーション”) :

    1. External Fragmentation(“外部断片化”): total memory space exists to satisfy a request, but it is not contiguous.
    2. Internal Fragmentation(“内部断片化”): allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but not being used.
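The three placement strategies above can be sketched as follows; the hole sizes and request size are illustrative:

```python
# Toy comparison of the three placement strategies on a free-hole list.
# Each function returns the index of the chosen hole, or None if no hole fits.

def first_fit(holes, n):
    for i, size in enumerate(holes):
        if size >= n:
            return i  # first hole big enough
    return None

def best_fit(holes, n):
    candidates = [(size, i) for i, size in enumerate(holes) if size >= n]
    return min(candidates)[1] if candidates else None  # smallest adequate hole

def worst_fit(holes, n):
    candidates = [(size, i) for i, size in enumerate(holes) if size >= n]
    return max(candidates)[1] if candidates else None  # largest hole

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))  # -> 1 (the 500-byte hole)
print(best_fit(holes, 212))   # -> 3 (the 300-byte hole, smallest leftover)
print(worst_fit(holes, 212))  # -> 4 (the 600-byte hole, largest leftover)
```

Note that best-fit and worst-fit must scan the whole list (or keep it sorted), while first-fit can stop early.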

Reduce external fragmentation by memory compaction:

The mechanism that eliminates memory fragmentation is called compaction (memory compaction); depending on the implementation, compaction may be performed together with garbage collection. For this reason compaction is sometimes lumped together under the term garbage collection, but strictly speaking the two are distinct.

Shuffle memory contents to place all free memory together in one large block. Compaction is possible only if relocation is dynamic (“動的再配置”) , and is done at execution time.

Compaction is one way to solve this problem, but another is Paging, a classic technique in modern operating systems.


Background of “Paging”

  • Divide physical memory (“物理アドレス”) into fixed-sized blocks called Frames (“フレーム”) (the size is a power of 2, typically between 512 bytes and 8192 bytes).
  • Divide logical memory (“仮想/論理アドレス”) into blocks of the same size called Pages (“ページ”).

Address generated by CPU is divided into:

  • Page number (p) (“ページ番号”): used as an index into a page table which contains base address of each page in physical memory.
  • Page offset (d) (“オフセット”): combined with base address to define the physical memory address that is sent to the memory unit.


Assume a page/frame size of 4 bytes:

Page 0 => page table (0, 5) => Frame 5
Page 1 => page table (1, 6) => Frame 6
… …

“a” => (number, offset) = (0, 0) => page table (0, 5) => [Frame 5, +0] = 4*5 + 0 = 20

“f” => (number, offset) = (1, 1) => page table (1, 6) => [Frame 6, +1] = 4*6 + 1 = 25

… …
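The worked example above can be reproduced in a few lines, assuming the 4-byte page size and the page table mapping page 0 to frame 5 and page 1 to frame 6:

```python
# Paging address translation for the worked example: page/frame size 4 bytes,
# page table {0: 5, 1: 6}.

PAGE_SIZE = 4
page_table = {0: 5, 1: 6}

def logical_to_physical(addr: int) -> int:
    p, d = divmod(addr, PAGE_SIZE)        # page number p, page offset d
    return page_table[p] * PAGE_SIZE + d  # frame base address + offset

# 'a' is at logical address 0, 'f' is at logical address 5
print(logical_to_physical(0))  # -> 20 (frame 5, offset 0)
print(logical_to_physical(5))  # -> 25 (frame 6, offset 1)
```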

Paging is a form of dynamic relocation.
Every logical address is bound by the paging hardware to some physical address. Using paging is similar to using a table of base (or relocation) registers, one for each frame of memory.

Use “Paging” to solve the fragmentation problem (External Fragmentation)

With the paging scheme there is no external fragmentation, but there may be some internal fragmentation: memory is allocated in whole frames, so a process’s last page may not completely fill the frame it occupies.
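For instance, a quick calculation of the internal fragmentation for a hypothetical 10,000-byte process with 4 KB pages:

```python
import math

# Internal fragmentation for an illustrative process, assuming 4 KB pages.
PAGE_SIZE = 4096
process_size = 10_000  # bytes (hypothetical)

frames_needed = math.ceil(process_size / PAGE_SIZE)   # whole frames allocated
wasted = frames_needed * PAGE_SIZE - process_size     # unused bytes in last frame

print(frames_needed, wasted)  # -> 3 2288
```

On average, about half a frame per process is lost this way, which argues for keeping pages reasonably small.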

Implementation of Page Table

The page table is kept in main memory.

  • Page-table base register (PTBR) points to the page table.
  • Page-table length register (PTLR) indicates size of the page table.

What is Hierarchical Page Tables?

Most modern computer systems support a large logical address space, for which a single page table becomes excessively large.
Since we would not want to allocate such a page table contiguously in main memory, we break up the logical address space into multiple page tables, paging the page table itself.
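As a sketch of how a two-level scheme divides an address, assume a 32-bit logical address with 4 KB pages (12-bit offset) and the remaining 20 bits split into two 10-bit indices; the field widths here are one common choice, not the only one:

```python
# Splitting a 32-bit logical address for a hypothetical two-level page table:
# p1 indexes the outer page table, p2 indexes the inner page table, d is the offset.

OFFSET_BITS = 12   # 4 KB pages
INNER_BITS = 10    # 1024 entries per inner table

def split(addr: int):
    d = addr & ((1 << OFFSET_BITS) - 1)                  # low 12 bits
    p2 = (addr >> OFFSET_BITS) & ((1 << INNER_BITS) - 1) # next 10 bits
    p1 = addr >> (OFFSET_BITS + INNER_BITS)              # top 10 bits
    return p1, p2, d

print(split(0x12345678))  # -> (72, 837, 1656)
```

Only the outer table and the inner tables actually in use need to be resident, so the table no longer has to occupy one huge contiguous region.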


Feature of Paging

An important aspect of paging is the clear separation between the user’s view of memory and the actual physical memory:

The difference between the user’s view of memory and the actual physical memory is reconciled by the address-translation hardware.


What is Segmentation

Memory segmentation is a computer (primary) memory management technique of division of a computer’s primary memory into segments or sections.
In a computer system using segmentation, a reference to a memory location includes a value that identifies a segment and an offset (memory location) within that segment.


The difference between Page and Segmentation

  • A page is a fixed-size block, while a segment is of variable size.
  • Paging may lead to internal fragmentation; segmentation may lead to external fragmentation.

Segmentation table (different from page table)

Each process has a segment table: the segment-table base register (STBR) points to the table’s location in memory, and the segment-table length register (STLR) indicates the number of segments used by the process.
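A segmentation lookup can be sketched as below; each segment-table entry holds a (base, limit) pair, and the values are illustrative:

```python
# Segmentation address translation: a reference is (segment number, offset).
# Offsets beyond the segment's limit trap to the operating system.

segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]  # (base, limit) pairs

def translate(segment: int, offset: int) -> int:
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("offset beyond segment limit -> trap to OS")
    return base + offset

print(translate(2, 53))   # -> 4353
print(translate(0, 999))  # -> 2399
```

Unlike paging, the hardware must compare the offset against a per-segment limit, because segments are variable-sized.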

Virtual Memory(仮想記憶)

What is virtual memory?

Virtual memory is a memory-management technique. It gives an application the impression that it has a contiguous, complete address space, while in fact its memory is usually split across multiple physical fragments, some of which are temporarily stored on external disk storage and exchanged with main memory when necessary.
Compared with a system without virtual memory, this technique makes large programs easier to write and makes real physical memory (such as RAM) more efficient to use.

Demand Paging(デマンドページング): Bring a page into memory only when it is needed.


When there is no free frame: Page Replacement(ページ置換):

  • What does that mean?
    Find some page in memory that is not really in use, and swap it out.
  • Prevent Page Fault(“ページフォールト”):

    Prevent over-allocation of memory by modifying the page-fault service routine to include page replacement.
    Page replacement completes the separation between logical memory and physical memory: a large virtual memory can be provided on top of a smaller physical memory.
  • Of course, we want the lowest possible page-fault rate.
  • Page-replacement algorithms: FIFO, Least Recently Used (LRU), and counting-based algorithms (including LFU and MFU).
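A quick way to compare two of these algorithms is to count page faults for the same reference string; the reference string and frame count below are illustrative:

```python
from collections import OrderedDict, deque

# Page-fault counts for FIFO and LRU replacement with a fixed number of frames.

def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(queue.popleft())  # evict the oldest-loaded page
            frames.add(p)
            queue.append(p)
    return faults

def lru_faults(refs, nframes):
    frames, faults = OrderedDict(), 0
    for p in refs:
        if p in frames:
            frames.move_to_end(p)                # mark as most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)       # evict least recently used
            frames[p] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(fifo_faults(refs, 3), lru_faults(refs, 3))  # -> 10 9
```

On this string LRU faults slightly less often than FIFO, which is typical: LRU exploits the recency of references, while FIFO may evict a heavily used page simply because it was loaded early.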

Benefits of virtual memory:

  • Copy-on-write:
    1. Allows both the parent and child process to initially share the same pages in memory.
    2. If either process modifies a shared page, that page is first copied, and the modification is applied to the new copy.
  • Memory-mapped file:
    1. Allows file I/O to be treated as routine memory access by mapping a disk block to a page in memory.
    2. A file is initially read using demand paging: a page-sized portion of the file is read from the file system into a physical page. Subsequent reads and writes to the file are then treated as ordinary memory accesses.
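A minimal demonstration of a memory-mapped file using Python’s standard `mmap` module; the file path and contents are illustrative:

```python
import mmap
import os
import tempfile

# Create a small file, then access it through a memory mapping:
# reads and writes on the mapping are ordinary memory accesses,
# flushed back to the file when the mapping is closed.

path = os.path.join(tempfile.mkdtemp(), "demo.bin")
with open(path, "wb") as f:
    f.write(b"hello world")

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:  # map the whole file
        print(mm[0:5])                    # read through the mapping -> b'hello'
        mm[0:5] = b"HELLO"                # write through the mapping

with open(path, "rb") as f:
    print(f.read())                       # -> b'HELLO world'
```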

Allocating frames


  • Fixed (equal) allocation:
    With 100 frames and 5 processes, give each process 20 frames.
  • Proportional/priority allocation:
    Allocate frames according to the size (or priority) of each process; on a page fault, select for replacement a frame from a process with a lower priority.
  • Global vs. local allocation:
    With global replacement, one process can take a frame from another: if another process holds an old frame, it can be replaced. With local replacement, each process selects only from its own set of frames.


Thrashing(“スラッシング”): a process is busy swapping pages in and out. It will lead to:

  • Low CPU utilization.
  • The OS thinks that it needs to increase the degree of multiprogramming, so another process is added to the system, which makes the situation worse.


VCVina copyrights this specification. No part of this specification may be reproduced in any form or means, without the prior written consent of VCVina.
