Tuesday, February 5, 2013

Memory

From Intel spec

http://www.intel.com/content/dam/www/public/us/en/documents/datasheets/3rd-gen-core-desktop-vol-1-datasheet.pdf


Data Scrambling

The memory controller incorporates a DDR3 Data Scrambling feature to minimize the
impact of excessive di/dt on the platform DDR3 VRs due to successive 1s and 0s on the
data bus. Past experience has demonstrated that traffic on the data bus is not random.
Rather, it can have energy concentrated at specific spectral harmonics creating high
di/dt that is generally limited by data patterns that excite resonance between the
package inductance and on die capacitances. As a result the memory controller uses a
data scrambling feature to create pseudo-random patterns on the DDR3 data bus to
reduce the impact of any excessive di/dt.
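The datasheet does not publish the scrambler details (polynomial, seed), but the general idea is easy to show: XOR the data with a pseudo-random sequence, for example from an LFSR, before it goes onto the bus, and XOR it with the same sequence on the way back to recover it. Below is a minimal sketch in C, with an arbitrary 16-bit LFSR standing in for whatever the memory controller actually uses.

/* Illustrative sketch only -- not Intel's actual scrambler.  XORing the
 * data with an LFSR sequence turns long runs of 1s/0s into a roughly
 * random pattern; XORing again with the same sequence restores the data. */
#include <stdint.h>
#include <stdio.h>

static uint16_t lfsr_step(uint16_t s)
{
    /* 16-bit Fibonacci LFSR, taps 16,14,13,11 (a common maximal polynomial) */
    uint16_t bit = ((s >> 0) ^ (s >> 2) ^ (s >> 3) ^ (s >> 5)) & 1u;
    return (uint16_t)((s >> 1) | (bit << 15));
}

static void scramble(uint16_t *data, int n, uint16_t seed)
{
    for (int i = 0; i < n; i++) {
        seed = lfsr_step(seed);
        data[i] ^= seed;            /* the same call descrambles */
    }
}

int main(void)
{
    uint16_t burst[8] = { 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0, 0, 0, 0 };
    scramble(burst, 8, 0xACE1);     /* worst-case pattern becomes pseudo-random */
    for (int i = 0; i < 8; i++)
        printf("%04X ", (unsigned)burst[i]);
    printf("\n");
    scramble(burst, 8, 0xACE1);     /* XOR again with the same seed -> original data */
    return 0;
}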

Interleaving, or Dual-Channel Symmetric mode

Dual-Channel Symmetric mode, also known as interleaved mode, provides maximum 
performance on real world applications. Addresses are ping-ponged between the 
channels after each cache line (64-byte boundary). If there are two requests, and the 
second request is to an address on the opposite channel from the first, that request can 
be sent before data from the first request has returned. If two consecutive cache lines 
are requested, both may be retrieved simultaneously, since they are ensured to be on 
opposite channels. Use Dual-Channel Symmetric mode when both Channel A and 
Channel B DIMM connectors are populated in any order, with the total amount of 
memory in each channel being the same.

When both channels are populated with the same memory capacity and the boundary 
between the dual channel zone and the single channel zone is the top of memory, the 
IMC operates completely in Dual-Channel Symmetric mode.
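A simplified sketch of the interleaving described above, assuming plain alternation of 64-byte cache lines between the two channels (the real IMC address decode may hash more address bits than this):

/* Simplified sketch: with two equally populated channels and 64-byte
 * cache-line interleaving, consecutive lines alternate channels, so two
 * consecutive lines can be fetched in parallel. */
#include <stdint.h>
#include <stdio.h>

static int channel_of(uint64_t phys_addr)
{
    /* bit 6 is the first address bit above the 64-byte line offset */
    return (int)((phys_addr >> 6) & 1);
}

int main(void)
{
    for (uint64_t a = 0; a < 4 * 64; a += 64)
        printf("line at 0x%03llx -> channel %d\n",
               (unsigned long long)a, channel_of(a));
    return 0;
}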

Thursday, September 22, 2011

Symmetric processor Vs Asymmetric processor

ASMP


In asymmetric multiprocessing (ASMP), the program tasks (or threads) are strictly divided by type between the processors.
In ASMP, each processor has its own memory and I/O, which the other processors cannot access.

So tasks cannot be scheduled freely across the processors the way they can on an SMP system; each processor only runs the kind of work assigned to it.

Example
The PS/2 Server 195 and Server 295

SMP:

Most of the processors we use now are SMP processors, where all the processors share the same memory and I/O.
Please refer to the following link:
http://ohlandl.ipv7.net/CPU/ASMP_SMP.html

Thanks
Suresh


Monday, August 8, 2011

Everything About DRAM

Please refer to the following article:


http://www.anandtech.com/show/3851/everything-you-always-wanted-to-know-about-sdram-memory-but-were-afraid-to-ask/1


Some Notes
Row-Column (or Command) Delay (tRCD):

The time to activate a bank is called the Row-Column (or Command) Delay and is denoted by the symbol tRCD.

This variable represents the minimum time needed to latch the command at the command interface, program the control logic, and read the data from the memory array into the Sense Amplifiers in preparation for column-level access.

Column Address Strobe Latency (tCAS):

The time to read a byte of data from the open page is called the Column Address Strobe (CAS) Latency and is denoted by the symbol CL or tCAS.

This variable represents the minimum time needed to latch the command at the command interface, program the control logic, gate the requested data from the Sense Amps into the Input/Output (I/O) Buffers through a process known as pre-fetching, and place the first word of data on the Memory Bus.

Row Access Strobe (RAS) Precharge Delay (tRP):

The time to precharge an open bank is called the Row Access Strobe (RAS) Precharge Delay and is denoted by the symbol tRP.

Row-to-Row (Activate-to-Activate) Delay (tRRD):

The minimum time interval between ACT commands to different banks is the Row-to-Row Delay and is denoted by the symbol tRRD.

Sequential reads to the same page make these types of transactions even more profitable, as each successive access can be scheduled at a minimum of tBurst (4T) clocks from the last. This timing is captured as the CAS-to-CAS Delay (tCCD), commonly referred to as the 'Back-to-Back CAS Delay' (B2B), as shown in Figure 7 of the linked article. This feature makes extremely high data transfer rates possible for total burst lengths of one page or less.
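As a rough worked example of how these timings add up (the numbers below are illustrative, not taken from the article): for DDR3-1600 the command clock runs at 800 MHz, i.e. 1.25 ns per clock, so with 9-9-9 (tCAS-tRCD-tRP) timings a page hit costs only tCAS, an access to an idle bank costs tRCD + tCAS, and a page miss additionally pays tRP first.

/* Illustrative latency arithmetic for DDR3-1600 with 9-9-9 timings. */
#include <stdio.h>

int main(void)
{
    const double t_clk_ns = 1.25;            /* DDR3-1600: 800 MHz command clock */
    const int tCAS = 9, tRCD = 9, tRP = 9;   /* timings in clocks                */

    printf("page hit  : %.2f ns\n", tCAS * t_clk_ns);
    printf("page empty: %.2f ns\n", (tRCD + tCAS) * t_clk_ns);
    printf("page miss : %.2f ns\n", (tRP + tRCD + tCAS) * t_clk_ns);
    return 0;
}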



Suresh

Tuesday, February 15, 2011

What is the difference between DMA and Bus Mastering

From http://www.digitalprosound.com/Htm/TechStuff/June/SCSI-Part3_3.htm

What is the difference between regular DMA and bus mastering?


Plenty! First, let's look at bus mastering again, but from a DMA point of view. A bus is a data transport. Bus mastering is a very advanced means of transporting data to and from devices and/or memory using the PCI bus as a conduit. A device that issues read and write operations to memory and/or I/O slave devices is considered the master, although a master device can also have slave memory and/or I/O ports available to be accessed by other masters. For example, an Ethernet controller acts as a bus master when it writes the data it receives from the LAN into memory and reads the data it must send over the LAN, but it acts as a slave when the CPU, acting as a master, programs it to initialize it and to specify where it must get and put data.


Only one bus master can own, or "drive", the bus at a given instant, and the bus arbiter is responsible for arbitrating bus master requests from the various bus master devices. A bus master device will request access to the bus, which is granted immediately provided no other master has it at the moment. If another master device has been granted access, the new one must wait until the first one completes its single or burst transfer, or the bus arbiter times out and yanks the access away in favor of the new requesting master, whichever happens first. If an operation is interrupted by a timeout, it is resumed when the issuing master receives its turn again. The CPU is a bus master device, and is always present.


The Intel PIIX family of IDE controllers found in all modern Intel chipsets for the x86 family are bus master devices. The SoundBlaster Live! is a bus master device that accesses main memory through the bus to read samples. There are many peripherals which use bus mastering on the PCI bus to free the CPU from actually doing every transfer, for example, video cards, network cards, SCSI controllers, other storage devices, and so on. Note that bus mastering transfers do not require and therefore do not tie up the DMA channels like normal DMA devices do.
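To make the contrast concrete, here is a purely hypothetical sketch of how a bus-master device such as a NIC is typically driven: the CPU only fills in a descriptor in main memory, and the device then masters the bus on its own to fetch the descriptor and the packet data, with no 8237 DMA channel involved. The structure layout and field names are invented for illustration and do not match any real device.

/* Hypothetical bus-master NIC transmit descriptor: the device walks a
 * descriptor ring in main memory by itself; the CPU never moves the
 * packet bytes and no legacy DMA channel is used. */
#include <stdint.h>

struct tx_descriptor {
    uint64_t buffer_phys;   /* physical address of the packet buffer   */
    uint16_t length;        /* bytes to transmit                       */
    uint16_t flags;         /* OWN bit: 1 = descriptor owned by device */
};

#define DESC_OWN 0x8000

/* CPU side: hand one packet to the (hypothetical) device.  The device
 * will master the bus, read the buffer, and clear the OWN bit when done. */
static void post_packet(volatile struct tx_descriptor *d,
                        uint64_t pkt_phys, uint16_t len)
{
    d->buffer_phys = pkt_phys;
    d->length      = len;
    d->flags       = DESC_OWN;
}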

Normal DMA is controlled by a dedicated chip, and the DMA chip itself is a bus master device. It can be programmed by the CPU to perform transfers from memory to I/O or from I/O to memory (some controllers also allow memory-to-memory, but that is not the case with the PC, although two DMA channels can be used to do it with some fancy driver footwork); a minimal sketch of this programming on the PC appears after the list below. The DMA system therefore acts as a bus master to perform the programmed operation while the CPU can be doing something else, and the DMA controller signals the CPU when the transfer is complete. DMA is used to perform transfers, without CPU intervention, to or from peripherals that don't have bus-master capabilities. DMA issues accesses similar to standard bus I/O accesses, but with the addition of the handshaking lines DMA_Request and DMA_Acknowledge. These signals are present on the bus for each DMA channel, and a slave device must handle them to be operated through DMA. Obviously, this is a much simpler system than having to support all the complex logic required in a bus master device. The main limitations of a DMA-capable slave compared with a bus-master peripheral are:

1) The DMA slave is passive. It is the CPU which must specify the transfers to be done. The bus master device can perform transfers by its own initiative without restrictions.
2) DMA can only transfer blocks of contiguous memory content, and only one block for each programmed transaction. The bus masters can access memory or I/O following any pattern without restriction.
3) In the case of the PC, the DMA device can only transfer blocks of up to 64 KBytes, and always on 64 KByte boundaries, which limits its utility. In older PCs, the DMA system could only access the first megabyte of memory. Later it was extended to the first 16 megabytes and currently the DMA device can more often access all memory, but always within 64 KByte boundaries for each operation.
4) DMA is generally slower, although there are newer, faster modes and burst timing modes that achieve considerable throughput. These modes must be specifically supported by the slaves in order to use them. The original Intel 8237 DMA controller was extremely slow: so slow that disk transfers were more efficiently done by the CPU using PIO mode 4, because DMA would become the bottleneck. In the best theoretical case (which was never met) it could only transfer 4 MB/s; the reality was more like 1 MB/s.
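As promised above, here is a minimal sketch of what programming the PC's DMA chip looks like, using the classic 8237 I/O ports for channel 2 (the floppy channel) to set up a device-to-memory transfer. It assumes a kernel or bare-metal context with port I/O access and an outb() helper; the buffer must live in DMA-reachable memory and must not cross a 64 KByte boundary (limitation 3 above).

/* Minimal sketch of legacy 8237 programming for channel 2.
 * outb() is assumed to be the usual port-output helper. */
#include <stdint.h>

extern void outb(uint16_t port, uint8_t val);

void dma2_setup_device_to_memory(uint32_t phys, uint16_t count)
{
    outb(0x0A, 0x06);                            /* mask channel 2 while programming */
    outb(0x0C, 0xFF);                            /* reset the byte-pointer flip-flop */

    outb(0x04, (uint8_t)(phys & 0xFF));          /* address, low byte then high byte */
    outb(0x04, (uint8_t)((phys >> 8) & 0xFF));
    outb(0x81, (uint8_t)((phys >> 16) & 0xFF));  /* page register for channel 2      */

    outb(0x0C, 0xFF);                            /* reset flip-flop again for count  */
    outb(0x05, (uint8_t)((count - 1) & 0xFF));   /* count register holds N - 1       */
    outb(0x05, (uint8_t)(((count - 1) >> 8) & 0xFF));

    outb(0x0B, 0x46);                            /* single mode, write-to-memory, ch2 */
    outb(0x0A, 0x02);                            /* unmask channel 2                  */
}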



In other words, the difference is this: a PCI device in bus-master mode uses the PCI bus directly and performs its transfers whenever it gets the bus (that is, after winning PCI arbitration).


But DMA is a separate chip; using it, the CPU initiates transfers to or from wherever it wants to read or write.


~Suresh 





Friday, January 14, 2011

Basic C

If you want to know how a C program works in Linux, please refer to the following links. They are well-written articles and very useful.
http://blog.ksplice.com/2010/03/libc-free-world/
http://www.muppetlabs.com/~breadbox/software/tiny/teensy.html

To learn how a program written in the C language runs on the Linux operating system, click the links given above.
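Those articles walk through what sits beneath main(): the C runtime, the ELF entry point, and finally raw system calls. In the same spirit, here is a minimal sketch (assuming x86-64 Linux, built with gcc -nostdlib -static) that prints a message and exits using direct syscalls, with no libc at all.

/* tiny.c -- libc-free hello world for x86-64 Linux.
 * Build: gcc -nostdlib -static -o tiny tiny.c */

static long sys_call3(long n, long a, long b, long c)
{
    long ret;
    __asm__ volatile ("syscall"
                      : "=a"(ret)
                      : "a"(n), "D"(a), "S"(b), "d"(c)
                      : "rcx", "r11", "memory");
    return ret;
}

void _start(void)
{
    static const char msg[] = "hello, libc-free world\n";
    sys_call3(1, 1, (long)msg, sizeof msg - 1);  /* write(1, msg, len) */
    sys_call3(60, 0, 0, 0);                      /* exit(0)            */
    __builtin_unreachable();
}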


~Suresh