Saturday, June 9, 2012

History of general purpose CPUs

1950s: early designs

Each of the computer designs of the early 1950s was unique; there were no upward-compatible machines or computer architectures with multiple, differing implementations. Programs written for one machine would not run on another kind, even other kinds from the same company. This was not a major drawback at the time because there was not a large body of software developed to run on computers, so starting programming from scratch was not seen as a large barrier.

The design freedom of the time was very important, for designers were very constrained by the cost of electronics, yet just beginning to explore how a computer could best be organized. Some of the basic features introduced during this period included index registers (on the Ferranti Mark 1), a return-address saving instruction (UNIVAC I), immediate operands (IBM 704), and the detection of invalid operations (IBM 650).

By the end of the 1950s, commercial builders had developed factory-constructed, truck-deliverable computers. The most widely installed computer was the IBM 650, which used drum memory onto which programs were loaded using either paper tape or punched cards. Some very high-end machines also included core memory, which provided higher speeds. Hard disks were also starting to become popular.

A computer is an automatic abacus. The type of number system affects the way it works. In the early 1950s, most computers were built for specific numerical processing tasks, and many machines used decimal numbers as their basic number system; that is, the mathematical functions of the machines worked in base 10 instead of base 2 as is common today. These were not merely binary coded decimal: most machines actually had ten vacuum tubes per digit in each register. Some early Soviet computer designers implemented systems based on ternary logic; that is, a bit could have three states: +1, 0, or -1, corresponding to positive, zero, or negative voltage.

An early project for the U.S. Air Force, BINAC, attempted to make a lightweight, simple computer by using binary arithmetic. It deeply impressed the industry.

As late as 1970, major computer languages were unable to standardize their numeric behavior because decimal computers had groups of users too large to alienate.

Even when designers used a binary system, they still had many odd ideas. Some used sign-magnitude arithmetic (-1 = 10001), or ones' complement (-1 = 11110), rather than modern two's complement arithmetic (-1 = 11111). Most computers used six-bit character sets, because they adequately encoded Hollerith cards. It was a major revelation to designers of this period to realize that the data word should be a multiple of the character size. They began to design computers with 12-, 24- and 36-bit data words (e.g. see the TX-2).
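To make the contrast concrete, here is a minimal Python sketch (my own illustration, not from the article) that prints -1 under the three five-bit encodings just mentioned; the bit width and helper names are assumptions chosen to match the examples above.

    # Illustrative sketch: the value -1 in three historical 5-bit encodings.
    BITS = 5

    def sign_magnitude(n, bits=BITS):
        # Top bit is the sign; the remaining bits hold the magnitude.
        sign = 1 if n < 0 else 0
        return (sign << (bits - 1)) | abs(n)

    def ones_complement(n, bits=BITS):
        # Negative values are formed by inverting every bit of the magnitude.
        mask = (1 << bits) - 1
        return abs(n) if n >= 0 else (~abs(n)) & mask

    def twos_complement(n, bits=BITS):
        # Negative values wrap around modulo 2**bits (the modern convention).
        return n & ((1 << bits) - 1)

    for name, f in [("sign-magnitude", sign_magnitude),
                    ("ones' complement", ones_complement),
                    ("two's complement", twos_complement)]:
        print(f"{name:18} -1 = {f(-1):0{BITS}b}")
    # Prints 10001, 11110 and 11111, matching the examples in the text.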
In this era, Grosch's law dominated computer design: computer cost increased as the square of its speed.

1960s: the computer revolution and CISC

One major problem with early computers was that a program for one would not work on others. Computer companies found that their customers had little reason to remain loyal to a particular brand, as the next computer they purchased would be incompatible anyway. At that point, price and performance were usually the only concerns.

In 1962, IBM tried a new approach to designing computers. The plan was to make an entire family of computers that could all run the same software, but with different performance, and at different prices. As users' requirements grew they could move up to larger computers, and still keep all of their investment in programs, data and storage media.

To do this they designed a single reference computer called the System/360 (or S/360). The System/360 was a virtual computer, a reference instruction set and capabilities that all machines in the family would support. To provide different classes of machines, each computer in the family would use more or less hardware emulation, and more or less microprogram emulation. A low-end machine could include a very simple processor for low cost, but would require a larger microcode emulator to provide the rest of the instruction set, which would slow it down. A high-end machine would use a much more complex processor that could directly process more of the System/360 design, thus running a much simpler and faster emulator.

IBM chose to make the reference instruction set quite complex, and very capable. This was a conscious choice. Even though the computer was complex, its "control store" containing the microprogram would stay relatively small, and could be made with very fast memory. Another important effect was that a single instruction could describe quite a complex sequence of operations. Thus the computers would generally have to fetch fewer instructions from the main memory, which could be made slower, smaller and less expensive for a given combination of speed and price.

As the S/360 was to be a successor to both scientific machines like the 7090 and data processing machines like the 1401, it needed a design that could reasonably support all forms of processing. Hence the instruction set was designed to manipulate not just simple binary numbers, but text, scientific floating-point (similar to the numbers used in a calculator), and the binary coded decimal arithmetic needed by accounting systems.

Almost all following computers included these innovations in some form. This basic set of features is now called a "Complex Instruction Set Computer," or CISC (pronounced "sisk"), a term not invented until many years later, when RISC (Reduced Instruction Set Computer) began to gain market share.

In many CISCs, an instruction could access either registers or memory, usually in several different ways. This made the CISCs easier to program, because a programmer could remember just thirty to a hundred instructions, and a set of three to ten addressing modes, rather than thousands of distinct instructions. This was called an "orthogonal instruction set." The PDP-11 and Motorola 68000 architectures are examples of nearly orthogonal instruction sets.
There was also the BUNCH (Burroughs, UNIVAC, NCR, Control Data Corporation, and Honeywell) that competed against IBM at this time, though IBM dominated the era with the S/360.

The Burroughs Corporation (which later merged with Sperry/Univac to become Unisys) offered an alternative to the S/360 with their B5000 series machines. In 1961, the B5000 had virtual memory, symmetric multiprocessing, and a multi-programming operating system (Master Control Program, or MCP) written in ALGOL 60, and the industry's first recursive-descent compilers as early as 1963.

1970s: Large Scale Integration

In the 1960s, the Apollo guidance computer and Minuteman missile made the integrated circuit economical and practical.

[Image: An Intel 8008 microprocessor.]

Around 1971, the first calculator and clock chips began to show that very small computers might be possible. The first microprocessor was the Intel 4004, designed in 1971 for a calculator company (Busicom) and produced by Intel. In 1972, Intel introduced a microprocessor with a different architecture: the 8008. The 8008 is the direct ancestor of the current Core i7, even now maintaining code compatibility (every instruction of the 8008's instruction set has a direct equivalent in the Intel Core i7's much larger instruction set, although the opcode values are different).

By the mid-1970s, the use of integrated circuits in computers was commonplace. The whole decade consists of upheavals caused by the shrinking price of transistors. It became possible to put an entire CPU on a single printed circuit board. The result was that minicomputers, usually with 16-bit words and 4K to 64K of memory, came to be commonplace.

CISCs were believed to be the most powerful types of computers, because their microcode was small and could be stored in very high-speed memory. The CISC architecture also addressed the "semantic gap" as it was perceived at the time: a defined distance between the machine language and the higher-level languages people used to program a machine. It was felt that compilers could do a better job with a richer instruction set.

Custom CISCs were commonly constructed using "bit slice" computer logic such as the AMD 2900 chips, with custom microcode. A bit-slice component is a piece of an ALU, register file or microsequencer. Most bit-slice integrated circuits were 4 bits wide.
By the early 1970s, the PDP-11 was developed, arguably the most advanced small computer of its day. Almost immediately, wider-word CISCs were introduced: the 32-bit VAX and 36-bit PDP-10. Also, to control a cruise missile, Intel developed a more capable version of its 8008 microprocessor, the 8080.

IBM continued to make large, fast computers. However, the definition of large and fast now meant more than a megabyte of RAM, clock speeds near one megahertz [1][2], and tens of megabytes of disk storage. IBM's System/370 was a version of the 360 tweaked to run virtual computing environments. The virtual computer was developed to reduce the possibility of an unrecoverable software failure.

The Burroughs B5000/B6000/B7000 series reached its largest market share. It was a stack computer whose OS was programmed in a dialect of Algol.

All these different developments competed for market share.

Early 1980s: the lessons of RISC

In the early 1980s, researchers at UC Berkeley and IBM both discovered that most computer language compilers and interpreters used only a small subset of the instructions of a CISC. Much of the power of the CPU was simply being ignored in real-world use. They realized that by making the computer simpler and less orthogonal, they could make it faster and less expensive at the same time.

At the same time, CPU calculation became faster in relation to the time for necessary memory accesses. Designers also experimented with using large sets of internal registers. The idea was to cache intermediate results in the registers under the control of the compiler. This also reduced the number of addressing modes and the orthogonality.

The computer designs based on this theory were called Reduced Instruction Set Computers, or RISC. RISCs generally had larger numbers of registers, accessed by simpler instructions, with a few instructions specifically to load and store data to memory. The result was a very simple core CPU running at very high speed, supporting the exact sorts of operations the compilers were using anyway.

A common variation on the RISC design employs the Harvard architecture, as opposed to the Von Neumann or stored-program architecture common to most other designs. In a Harvard architecture machine, the program and data occupy separate memory devices and can be accessed simultaneously. In Von Neumann machines the data and programs are mixed in a single memory device, requiring sequential accessing, which produces the so-called "Von Neumann bottleneck."

One downside to the RISC design has been that the programs that run on them tend to be larger. This is because compilers have to generate longer sequences of the simpler instructions to accomplish the same results. Since these instructions need to be loaded from memory anyway, the larger code size offsets some of the RISC design's fast memory handling.

Recently, engineers have found ways to compress the reduced instruction sets so they fit in even smaller memory systems than CISCs. Examples of such compression schemes include the ARM's "Thumb" instruction set. In applications that do not need to run older binary software, compressed RISCs are coming to dominate sales.

Another approach to RISCs was the MISC, "niladic" or "zero-operand" instruction set. This approach recognized that the majority of space in an instruction was used to identify its operands. These machines placed the operands on a push-down (last-in, first-out) stack. The instruction set was supplemented with a few instructions to fetch and store memory. Most used simple caching to provide extremely fast RISC machines, with very compact code. Another benefit was that interrupt latencies were extremely small, smaller than in most CISC machines (a rare trait in RISC machines). The Burroughs large systems architecture uses this approach. The B5000 was designed in 1961, long before the term "RISC" was invented. The architecture puts six 8-bit instructions in a 48-bit word, and was a precursor to VLIW design (see below: 1990 to today).
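As a rough illustration of the zero-operand idea, here is a minimal Python sketch of a stack machine (my own example, not from the article): because operands come implicitly from the top of the stack, the instructions themselves carry no operand fields and can be very short.

    # Minimal zero-operand (stack machine) sketch: operands live on a
    # push-down stack, so instructions need no operand fields at all.
    def run(program, literals):
        stack = []                      # the push-down (LIFO) operand stack
        lits = iter(literals)
        for op in program:
            if op == "PUSH":            # fetch the next literal onto the stack
                stack.append(next(lits))
            elif op == "ADD":           # pop two operands, push their sum
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "MUL":           # pop two operands, push their product
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
        return stack.pop()

    # Computes (2 + 3) * 4 without naming a single register or address.
    print(run(["PUSH", "PUSH", "ADD", "PUSH", "MUL"], [2, 3, 4]))  # 20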
The Burroughs architecture was one of the inspirations for Charles H. Moore's Forth programming language, which in turn inspired his later MISC chip designs. For example, his f20 cores had 31 five-bit instructions, which fit four to a 20-bit word.

RISC chips now dominate the market for 32-bit embedded systems. Smaller RISC chips are even becoming common in the cost-sensitive 8-bit embedded-system market. The main market for RISC CPUs has been systems that require low power or small size.

Even some CISC processors (based on architectures that were created before RISC became dominant), such as newer x86 processors, translate instructions internally into a RISC-like instruction set.

These numbers may surprise many, because the "market" is perceived to be desktop computers. x86 designs dominate desktop and notebook computer sales, but desktop and notebook computers are only a tiny fraction of the computers now sold. Most people in industrialised countries own more computers in embedded systems in their car and house than on their desks.

Mid-to-late 1980s: exploiting instruction-level parallelism

In the mid-to-late 1980s, designers began using a technique known as "instruction pipelining", in which the processor works on multiple instructions in different stages of completion. For example, the processor may be retrieving the operands for the next instruction while calculating the result of the current one. Modern CPUs may use over a dozen such stages. MISC processors achieve single-cycle execution of instructions without the need for pipelining.
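The payoff of pipelining follows from a little arithmetic. A minimal sketch of the standard idealized timing model (my own illustration; stage and instruction counts are assumptions): with S stages and N instructions, an unpipelined processor needs about S*N cycles, while a pipeline needs about S + N - 1, because once the pipe is full an instruction completes every cycle.

    # Illustrative timing model for an ideal pipeline (no stalls or branches).
    def cycles(n_instructions, n_stages, pipelined):
        if pipelined:
            # The first instruction takes n_stages cycles to drain through;
            # after that, one instruction completes per cycle.
            return n_stages + n_instructions - 1
        # Without pipelining, each instruction occupies all stages in turn.
        return n_stages * n_instructions

    N, S = 1000, 5
    print(cycles(N, S, pipelined=False))  # 5000 cycles
    print(cycles(N, S, pipelined=True))   # 1004 cycles, nearly a 5x speedup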
A similar idea, introduced only a few years later, was to execute multiple instructions in parallel on separate arithmetic logic units (ALUs). Instead of operating on only one instruction at a time, the CPU will look for several similar instructions that are not dependent on each other, and execute them in parallel. This approach is called superscalar processor design.

Such techniques are limited by the degree of instruction-level parallelism (ILP), the number of non-dependent instructions in the program code. Some programs are able to run very well on superscalar processors due to their inherent high ILP, notably graphics. However, more general problems do not have such high ILP, so the achievable speedups from these techniques are lower.

Branching is one major culprit. For example, the program might add two numbers and branch to a different code segment if the number is bigger than a third number. In this case, even if the branch operation is sent to the second ALU for processing, it still must wait for the results from the addition. It thus runs no faster than if there were only one ALU. The most common solution for this type of problem is to use a form of branch prediction.

To further the efficiency of the multiple functional units available in superscalar designs, operand register dependencies were found to be another limiting factor. To minimize these dependencies, out-of-order execution of instructions was introduced. In such a scheme, the instruction results that complete out of order must be re-ordered in program order by the processor for the program to be restartable after an exception. Out-of-order execution was the main advancement of the computer industry during the 1990s. A similar concept is speculative execution, where instructions from one direction of a branch (the predicted direction) are executed before the branch direction is known. When the branch direction is known, the predicted direction and the actual direction are compared. If the predicted direction was correct, the speculatively executed instructions and their results are kept; if it was incorrect, these instructions and their results are thrown out. Speculative execution, coupled with an accurate branch predictor, gives a large performance gain.

These advances, which were originally developed from research for RISC-style designs, allow modern CISC processors to execute twelve or more instructions per clock cycle, when traditional CISC designs could take twelve or more cycles to execute just one instruction.

The resulting instruction scheduling logic of these processors is large, complex and difficult to verify. Furthermore, the higher complexity requires more transistors, increasing power consumption and heat. In this respect RISC is superior because the instructions are simpler, have less interdependence, and make superscalar implementations easier. However, as Intel has demonstrated, the concepts can be applied to a CISC design, given enough time and money.

Historical note: Some of these techniques (e.g. pipelining) were originally developed in the late 1950s by IBM on their Stretch mainframe computer.

1990 to today: looking forward
VLIW and EPIC

The instruction scheduling logic that makes a superscalar processor is just boolean logic. In the early 1990s, a significant innovation was the realization that the coordination of a multiple-ALU computer could be moved into the compiler, the software that translates a programmer's instructions into machine-level instructions. This type of computer is called a very long instruction word (VLIW) computer.

Statically scheduling the instructions in the compiler (as opposed to letting the processor do the scheduling dynamically) can reduce CPU complexity. This can improve performance, reduce heat, and reduce cost.

Unfortunately, the compiler lacks accurate knowledge of runtime scheduling issues. Merely changing the CPU core frequency multiplier will have an effect on scheduling. The actual operation of the program, as determined by input data, will have major effects on scheduling. To overcome these severe problems, a VLIW system may be enhanced by adding the normal dynamic scheduling, losing some of the VLIW advantages.

Static scheduling in the compiler also assumes that dynamically generated code will be uncommon. Prior to the creation of Java, this was in fact true. It was reasonable to assume that slow compiles would only affect software developers. Now, with JIT virtual machines for Java and .NET, slow code generation affects users as well.

There were several unsuccessful attempts to commercialize VLIW. The basic problem is that a VLIW computer does not scale to different price and performance points, as a dynamically scheduled computer can. Another issue is that compiler design for VLIW computers is extremely difficult, and the current crop of compilers (as of 2005) don't always produce optimal code for these platforms.

Also, VLIW computers optimise for throughput, not low latency, so they were not attractive to the engineers designing controllers and other computers embedded in machinery. The embedded-systems markets had often pioneered other computer improvements by providing a large market that did not care about compatibility with older software.

In January 2000, a company called Transmeta took the interesting step of placing a compiler in the central processing unit, and making the compiler translate from a reference byte code (in their case, x86 instructions) to an internal VLIW instruction set. This approach combines the hardware simplicity, low power and speed of VLIW RISC with the compact main memory system and software reverse-compatibility provided by popular CISC.

Intel's Itanium chip is based on what they call an Explicitly Parallel Instruction Computing (EPIC) design. This design supposedly provides the VLIW advantage of increased instruction throughput. However, it avoids some of the issues of scaling and complexity by explicitly providing, in each "bundle" of instructions, information concerning their dependencies. This information is calculated by the compiler, as it would be in a VLIW design. The early versions were also backward-compatible with x86 software by means of an on-chip emulation mode. Integer performance was disappointing, and despite improvements, sales in volume markets continue to be low.

Computer data storage

1 GB of SDRAM mounted in a personal computer. An example of primary storage.
40 GB PATA hard disk drive (HDD); when connected to a computer it serves as secondary storage.
160 GB SDLT tape cartridge, an example of off-line storage. When used within a robotic tape library, it is classified as tertiary storage instead.
Computer data storage, often called storage or memory, refers to computer components and recording media that retain digital data. Data storage is a core function and fundamental component of computers.
In contemporary usage, 'memory' usually refers to semiconductor read-write random-access memory, typically DRAM (dynamic RAM), and sometimes to other forms of fast but temporary storage. 'Storage' refers to storage devices and their media not directly accessible by the CPU (secondary or tertiary storage), typically hard disk drives, optical disc drives, and other devices slower than RAM but non-volatile (retaining contents when powered down).[1] Historically, memory has been called core, main memory, real storage or internal memory, while storage devices have been referred to as secondary storage, external memory or auxiliary/peripheral storage.
These distinctions are fundamental to the architecture of computers. They also reflect an important technical difference between memory and mass storage devices, one that has been blurred by the historical usage of the term storage. Nevertheless, this article uses the traditional nomenclature.
Many different forms of storage, based on various natural phenomena, have been invented. So far, no practical universal storage medium exists, and all forms of storage have some drawbacks. Therefore a computer system usually contains several kinds of storage, each with an individual purpose.
A modern digital computer represents data using the binary numeral system. Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 1 or 0. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate its binary representation. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes (forty million bits) at one byte per character.
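As a quick sanity check of that figure, here is a back-of-the-envelope Python sketch (the page count comes from the text above; the characters-per-page figure is an illustrative assumption of mine):

    # Rough estimate: complete works of Shakespeare at one byte per character.
    pages = 1250                 # approximate printed page count (from the text)
    chars_per_page = 4000        # illustrative assumption for a dense page
    total_bytes = pages * chars_per_page
    print(total_bytes)           # 5,000,000 bytes, about five megabytes
    print(total_bytes * 8)       # 40,000,000 bits, about forty million bits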
The defining component of a computer is the central processing unit (CPU, or simply processor), because it operates on data, performs computations, and controls other components. In the most commonly used computer architecture, the CPU consists of two main parts: the control unit and the arithmetic logic unit (ALU). The former controls the flow of data between the CPU and memory; the latter performs arithmetic and logical operations on data.
Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result. It would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialised devices. Von Neumann machines differ in having a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions; they also tend to be simpler to design, in that a relatively simple processor may keep state between successive computations to build up complex procedural results. Most modern computers are von Neumann machines.
In practice, almost all computers use a variety of memory types, organized in a storage hierarchy around the CPU, as a trade-off between performance and cost. Generally, the lower a form of storage is in the hierarchy, the lower its bandwidth and the greater its access latency from the CPU. This traditional division of storage into primary, secondary, tertiary and off-line storage is also guided by cost per bit.
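One way to see why a hierarchy pays off is the standard effective-access-time model. A minimal sketch, with purely illustrative latencies and hit rate (none of these numbers come from the article): a small fast level in front of a large slow one yields an average latency close to the fast level's.

    # Effective access time for a two-level hierarchy (illustrative numbers).
    cache_ns, dram_ns = 1.0, 100.0   # assumed latencies for fast and slow levels
    hit_rate = 0.95                  # assumed fraction of accesses served fast

    effective_ns = hit_rate * cache_ns + (1 - hit_rate) * dram_ns
    print(effective_ns)  # 5.95 ns: near the fast level at a fraction of its cost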

Data storage device

Many different consumer electronic devices can store data.
Edison cylinder phonograph ca. 1899. The phonograph cylinder is a storage medium. The phonograph may or may not be considered a storage device.
A reel-to-reel tape recorder (Sony TC-630). The magnetic tape is a data storage medium. The recorder is data storage equipment using a portable medium (tape reel) to store the data.
Crafting tools such as paint brushes can be used as data storage equipment. The paint and canvas can be used as data storage media.
RNA might be the oldest data storage medium.[1]
A data storage device is a device for recording (storing) information (data). Recording can be done using virtually any form of energy, spanning from manual muscle power in handwriting, to acoustic vibrations in phonographic recording, to electromagnetic energy modulating magnetic tape and optical discs.
A storage device may hold information, process information, or both. A device that only holds information is a recording medium. Devices that process information (data storage equipment) may either access a separate portable (removable) recording medium or a permanent component to store and retrieve information.
Electronic data storage is storage that requires electrical power to store and retrieve data. Most storage devices that do not require vision and a brain to read data fall into this category. Electromagnetic data may be stored in either an analog or a digital format on a variety of media. Such data is considered electronically encoded, whether or not it is stored in a semiconductor device, since a semiconductor device was certainly used to record it on its medium. Most electronically processed data storage media (including some forms of computer data storage) are considered permanent (non-volatile) storage; that is, the data remain stored when power is removed from the device. In contrast, most electronically stored information within semiconductor microcircuits (computer chips) is volatile memory: it vanishes if power is removed.
With the exception of barcodes and OCR data, electronic data storage is easier to revise and may be more cost effective than alternative methods due to smaller physical space requirements and the ease of replacing (rewriting) data on the same medium. However, the durability of methods such as printed data is still superior to that of most electronic storage media. The durability limitations may be overcome with the ease of duplicating (backing-up) electronic data.

Magnetic storage

Magnetic storage and magnetic recording are terms from engineering referring to the storage of data on a magnetized medium. Magnetic storage uses different patterns of magnetization in a magnetizable material to store data and is a form of non-volatile memory. The information is accessed using one or more read/write heads. As of 2011, magnetic storage media, primarily hard disks, are widely used to store computer data as well as audio and video signals. In the field of computing, the term magnetic storage is preferred and in the field of audio and video production, the term magnetic recording is more commonly used. The distinction is less technical and more a matter of preference. Other examples of magnetic storage media include floppy disks, magnetic recording tape, and magnetic stripes on credit cards.



History

Magnetic storage in the form of audio recording on a wire was publicized by Oberlin Smith in 1888. He had filed a patent in September 1878, but did not pursue the idea, as his business was machine tools. The first publicly demonstrated magnetic recorder (at the Paris Exposition of 1900) was invented by Valdemar Poulsen in 1898. Poulsen's device recorded a signal on a wire wrapped around a drum. In 1928, Fritz Pfleumer developed the first magnetic tape recorder. Early magnetic storage devices were designed to record analog audio signals. Computers, and now most audio and video magnetic storage devices, record digital data.
In early computers, magnetic storage was also used for primary storage, in the form of magnetic drum memory, core memory, core rope memory, thin-film memory, twistor memory or bubble memory. Unlike in modern computers, magnetic tape was also often used for secondary storage.

Digital

A digital system[1] is a data technology that uses discrete (discontinuous) values. By contrast, non-digital (or analog) systems represent information using a continuous function. Although digital representations are discrete, the information represented can be either discrete, such as numbers and letters, or continuous, such as sounds, images, and other measurements.
The word digital comes from the same source as the words digit and digitus (the Latin word for finger), as fingers are used for discrete counting. It is most commonly used in computing and electronics, especially where real-world information is converted to binary numeric form, as in digital audio and digital photography.


Digital noise

When data is transmitted, or indeed handled at all, a certain amount of noise enters into the signal. Noise can have several causes: data transmitted wirelessly, such as by radio, may be received inaccurately, suffer interference from other wireless sources, or pick up background noise from the rest of the universe. Microphones pick up both the intended signal and background noise without discriminating between them, so when audio is encoded digitally, it typically already includes noise.
Electric pulses transmitted via wires are typically attenuated by the resistance of the wire, and changed by its capacitance or inductance. Temperature variations can increase or reduce these effects. While digital transmissions are also degraded, slight variations do not matter, since they are ignored when the signal is received. With an analog signal, variances cannot be distinguished from the signal itself, and so appear as a kind of distortion. In a digital signal, similar variances do not matter, as any signal close enough to a particular value is interpreted as that value. Care must be taken to avoid noise and distortion when connecting digital and analog systems, but more so when using analog systems.
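To illustrate why "close enough" suffices, here is a minimal Python sketch (my own illustration; the voltage levels and threshold are assumptions) that recovers clean bits from noisy samples by thresholding:

    # Noisy analog samples of an intended bit stream 1,0,1,1 (0 V = 0, 5 V = 1).
    samples = [4.7, 0.4, 5.2, 4.9]   # illustrative received voltages

    # Any value close enough to a level is interpreted as that level,
    # so moderate noise disappears entirely on reception.
    bits = [1 if v > 2.5 else 0 for v in samples]
    print(bits)  # [1, 0, 1, 1]: the noise has been discarded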


Symbol to digital conversion

Since symbols (for example, alphanumeric characters) are not continuous, representing symbols digitally is rather simpler than conversion of continuous or analog information to digital. Instead of sampling and quantization as in analog-to-digital conversion, such techniques as polling and encoding are used.
A symbol input device usually consists of a number of switches that are polled at regular intervals to see which switches are pressed. Data will be lost if, within a single polling interval, two switches are pressed, or a switch is pressed, released, and pressed again. This polling can be done by a specialized processor in the device to prevent burdening the main CPU. When a new symbol has been entered, the device typically sends an interrupt to alert the CPU to read it.
For devices with only a few switches (such as the buttons on a joystick), the status of each can be encoded as bits (usually 0 for released and 1 for pressed) in a single word. This is useful when combinations of key presses are meaningful, and is sometimes used for passing the status of modifier keys on a keyboard (such as shift and control). But it does not scale to support more keys than the number of bits in a single byte or word.
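A minimal Python sketch of this packing (the particular buttons and bit positions are my own illustrative assignments):

    # Pack the status of a few switches into one word, one bit per switch.
    BUTTON_A, BUTTON_B, SHIFT, CTRL = 1 << 0, 1 << 1, 1 << 2, 1 << 3

    state = BUTTON_A | SHIFT          # A and shift pressed simultaneously

    print(bool(state & BUTTON_A))     # True:  bit 0 is set
    print(bool(state & CTRL))         # False: bit 3 is clear
    # Combinations of presses remain visible because each key owns its own bit.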
Devices with many switches (such as a computer keyboard) usually arrange these switches in a scan matrix, with the individual switches on the intersections of x and y lines. When a switch is pressed, it connects the corresponding x and y lines together. Polling (often called scanning in this case) is done by activating each x line in sequence and detecting which y lines then have a signal, thus revealing which keys are pressed. When the keyboard processor detects that a key has changed state, it sends a signal to the CPU indicating the scan code of the key and its new state. The symbol is then encoded, or converted into a number, based on the status of modifier keys and the desired character encoding.
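A sketch of that scanning loop in Python (the 2x2 matrix and key names are hypothetical, chosen only to show the mechanism):

    # Hypothetical 2x2 scan matrix: pressed[x][y] is True when the switch at
    # the intersection of x line and y line connects them.
    pressed = [[False, True],    # x line 0: the key at y=1 is down
               [False, False]]   # x line 1: nothing down
    keymap = [['a', 'b'],
              ['c', 'd']]

    # Activate each x line in sequence and read back the y lines.
    for x in range(2):
        for y in range(2):
            if pressed[x][y]:
                print("scan code", (x, y), "=", keymap[x][y])  # (0, 1) = b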
A custom encoding can be used for a specific application with no loss of data. However, using a standard encoding such as ASCII is problematic if a symbol such as 'ß' needs to be converted but is not in the standard.


Properties of digital information

All digital information possesses common properties that distinguish it from analog communications methods:
  • Synchronization: Since digital information is conveyed by the sequence in which symbols are ordered, all digital schemes have some method for determining the beginning of a sequence. In written or spoken human languages synchronization is typically provided by pauses (spaces), capitalization, and punctuation. Machine communications typically use special synchronization sequences.
  • Language: All digital communications require a language, which in this context consists of all the information that the sender and receiver of the digital communication must both possess, in advance, in order for the communication to be successful. Languages are generally arbitrary and specify the meaning to be assigned to particular symbol sequences, the allowed range of values, methods to be used for synchronization, etc.
  • Errors: Disturbances (noise) in analog communications invariably introduce some generally small deviation, or error, between the intended and actual communication. Disturbances in a digital communication do not result in errors unless the disturbance is so large as to cause a symbol to be misinterpreted as another symbol or to disturb the sequence of symbols. It is therefore generally possible to have an entirely error-free digital communication. Further, techniques such as check codes may be used to detect errors and guarantee error-free communication through redundancy or retransmission. Errors in digital communications can take the form of substitution errors, in which a symbol is replaced by another symbol, or insertion/deletion errors, in which an extra incorrect symbol is inserted into or deleted from a digital message. Uncorrected errors in digital communications have an unpredictable and generally large impact on the information content of the communication.
  • Copying: Because of the inevitable presence of noise, making many successive copies of an analog communication is infeasible because each generation increases the noise. Because digital communications are generally error-free, copies of copies can be made indefinitely.
  • Granularity: When a continuously variable analog value is represented in digital form, there is always a decision as to the number of symbols to be assigned to that value. The number of symbols determines the precision or resolution of the resulting datum. The difference between the actual analog value and the digital representation is known as quantization error. For example, if the actual temperature is 23.234456544453 degrees but only two digits (23) are assigned to this parameter in a particular digital representation (e.g. a digital thermometer or a table in a printed report), the quantization error is 0.234456544453. This property of digital communication is known as granularity; a short numeric sketch appears after this list.
  • Compressible: According to Miller, "Uncompressed digital data is very large, and in its raw form would actually produce a larger signal (therefore be more difficult to transfer) than analog data. However, digital data can be compressed. Compression reduces the amount of bandwidth space needed to send information. Data can be compressed, sent and then decompressed at the site of consumption. This makes it possible to send much more information and result in, for example, digital television signals offering more room on the airwave spectrum for more television channels."[2]
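Picking up the granularity example above, a minimal Python sketch of the quantization-error computation (the temperature value comes from the list item; truncation to whole degrees is the obvious reading of "two digits"):

    # Quantization error when an analog value is reduced to two digits.
    actual = 23.234456544453           # the analog temperature from the example
    quantized = int(actual)            # keep only two digits: 23
    error = actual - quantized
    print(quantized, round(error, 12)) # 23 0.234456544453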

Historical digital systems

Even though digital signals are generally associated with the binary electronic digital systems used in modern electronics and computing, digital systems are actually ancient, and need not be binary or electronic.
  • Written text in books (due to the limited character set and the use of discrete symbols, in most cases the letters of an alphabet)
  • An abacus, created sometime between 1000 BC and 500 BC, later became a widespread aid to calculation; in effect it is a simple digital calculator that uses beads on rods to represent numbers. The beads have meaning only in their discrete up and down states, not in analog in-between states.
  • A beacon is perhaps the simplest non-electronic digital signal, with just two states (on and off). In particular, smoke signals are one of the oldest examples of a digital signal, where an analog "carrier" (smoke) is modulated with a blanket to generate a digital signal (puffs) that conveys information.
  • Morse code uses six digital states—dot, dash, intra-character gap (between each dot or dash), short gap (between each letter), medium gap (between words), and long gap (between sentences)—to send messages via a variety of potential carriers, such as electricity or light, for example using an electrical telegraph or a flashing light (a small encoder sketch follows this list).
  • The Braille system was the first binary format for character encoding, using a six-bit code rendered as dot patterns.
  • Flag semaphore uses rods or flags held in particular positions to send messages to the receiver watching them some distance away.
  • International maritime signal flags have distinctive markings that represent letters of the alphabet to allow ships to send messages to each other.
  • More recently invented, a modem modulates an analog "carrier" signal (such as sound) to encode binary electrical digital information, as a series of binary digital sound pulses. A slightly earlier, surprisingly reliable version of the same concept was to bundle a sequence of audio digital "signal" and "no signal" information (i.e. "sound" and "silence") on magnetic cassette tape for use with early home computers.
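As a rough illustration of Morse code's gap-based signalling, here is a small Python sketch. The (deliberately partial) code table is standard Morse; rendering the short (letter) gap as a single space and the medium (word) gap as " / " is our own convention for the example, and the intra-character gap is implicit between consecutive dots and dashes.

```python
MORSE = {"S": "...", "O": "---", "E": ".", "T": "-"}  # deliberately partial table

def encode(text):
    words = []
    for word in text.upper().split():
        # short gap between letters, rendered here as one space
        words.append(" ".join(MORSE[ch] for ch in word))
    # medium gap between words, rendered here as " / "
    return " / ".join(words)

print(encode("sos sos"))   # ... --- ... / ... --- ...
```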

Random access

In computer science, random access (sometimes called direct access) is the ability to access an element at an arbitrary position in a sequence in equal time, independent of the sequence's size. The position is arbitrary in the sense that it is unpredictable, hence the term "random" in "random access". The opposite is sequential access, in which a distant element takes longer to access.[1] A typical illustration of this distinction is to compare an ancient scroll (sequential: all material prior to the portion needed must be unrolled) with a book (random access: it can be flipped open immediately to any page). A more modern example is a cassette tape (sequential: one must fast-forward through earlier songs to get to later ones) versus a CD (random access: one can skip directly to the desired track).
In data structures, random access implies the ability to access any entry in a list in constant time, i.e. O(1), independent of the entry's position and of the list's size. Very few data structures can guarantee this other than arrays (and related structures such as dynamic arrays). Random access is critical to many algorithms, such as binary search (sketched below), integer sorting, and the sieve of Eratosthenes. Other data structures, such as linked lists, sacrifice random access in exchange for efficient insertion, deletion, or reordering of data. Self-balancing binary search trees may provide an acceptable compromise, where access time is equal for any member of a collection and grows only logarithmically with its size.
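As a sketch of why binary search depends on random access, consider the following Python fragment. The data and names are illustrative assumptions; the key point is the arr[mid] lookup, which costs O(1) on an array but would require walking mid nodes (sequential access) in a linked list, making each probe O(n).

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2   # jumping straight to arr[mid] needs random access
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = [2, 3, 5, 7, 11, 13, 17, 19]   # sorted array: O(1) access by index
print(binary_search(data, 13))        # 5, found in O(log n) probes
```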

Non-volatile memory

Non-volatile memory, nonvolatile memory, NVM or non-volatile storage is computer memory that can retain the stored information even when not powered. Examples of non-volatile memory include read-only memory, flash memory, ferroelectric RAM (F-RAM), most types of magnetic computer storage devices (e.g. hard disks, floppy disks, and magnetic tape), optical discs, and early computer storage methods such as paper tape and punched cards.
Non-volatile memory is typically used for the task of secondary storage, or long-term persistent storage. The most widely used form of primary storage today is a volatile form of random access memory (RAM), meaning that when the computer is shut down, anything contained in RAM is lost. Unfortunately, most forms of non-volatile memory have limitations that make them unsuitable for use as primary storage. Typically, non-volatile memory either costs more or performs worse than volatile random access memory.
Several companies are working on developing non-volatile memory systems comparable in speed and capacity to volatile RAM. IBM is currently developing MRAM (magnetoresistive RAM). Not only would such technology save energy, but it would allow for computers that could be turned on and off almost instantly, bypassing the slow start-up and shutdown sequences. In addition, Ramtron International has developed, produced, and licensed ferroelectric RAM (F-RAM), a technology that offers distinct properties compared with other non-volatile memory options, including extremely high endurance (exceeding 10^16 read/write cycles for 3.3 V devices), ultra-low power consumption (since F-RAM does not require a charge pump, unlike other non-volatile memories), single-cycle write speeds, and gamma radiation tolerance. Other companies that have licensed and produced F-RAM technology include Texas Instruments, Rohm, and Fujitsu.
Non-volatile data storage can be categorized into electrically addressed systems (e.g. read-only memory and flash memory) and mechanically addressed systems (hard disks, optical discs, magnetic tape, holographic memory, and the like). Electrically addressed systems are expensive but fast, whereas mechanically addressed systems have a low price per bit but are slow. Non-volatile memory may one day eliminate the need for comparatively slow forms of secondary storage such as hard disks.

Disk read-and-write head

Disk read/write heads are the small parts of a disk drive that move above the disk platter and transform the platter's magnetic field into electrical current (reading the disk) or, conversely, transform electrical current into a magnetic field (writing the disk).[1] The heads have gone through a number of changes over the years.

Description

In a hard drive, the heads 'fly' above the disk surface with a clearance of as little as 3 nanometres. This "flying height" is constantly decreasing to enable higher areal density. The flying height is controlled by the design of an air bearing etched onto the disk-facing surface of the slider, whose role is to keep the flying height constant as the head moves over the surface of the disk. If the head hits the disk's surface, a catastrophic head crash can result.

Traditional head

The heads themselves started out similar to the heads in tape recorders: simple devices made from a tiny C-shaped piece of highly magnetizable material called ferrite, wrapped in a fine wire coil. When writing, the coil is energized, a strong magnetic field forms in the gap of the C, and the recording surface adjacent to the gap is magnetized. When reading, the magnetized material rotates past the head, the ferrite core concentrates the field, and a current is generated in the coil. The gap, where the field is very strong, is quite narrow; it is roughly equal to the thickness of the magnetic medium on the recording surface, and it determines the minimum size of a recorded area on the disk. Ferrite heads are large and write fairly large features. They must also fly fairly far from the surface, which requires stronger fields and larger heads.

Metal in Gap (MIG)

Metal in Gap (MIG) heads are ferrite heads with a small piece of metal in the head gap that concentrates the field, allowing smaller features to be read and written. MIG heads were in turn replaced by thin film heads, which were electronically similar to ferrite heads and used the same physics, but were manufactured using photolithographic processes and thin films of material that allowed fine features to be created. Thin film heads were much smaller than MIG heads and therefore allowed smaller recorded features; they enabled 3.5 inch drives to reach 4 GB storage capacities in 1995. The geometry of the head gap was a compromise between what worked best for reading and what worked best for writing.

Magnetoresistance and giant magnetoresistance

The next head improvement was to optimize the thin film head for writing and to create a separate head for reading. The separate read head uses the magnetoresistive (MR) effect, in which the resistance of a material changes in the presence of a magnetic field. MR heads can read very small magnetic features reliably, but cannot create the strong field needed for writing. The term AMR (anisotropic MR) is used to distinguish this technology from the later improvement known as GMR (giant magnetoresistance). The introduction of the AMR head by IBM in 1996 led to a period of rapid areal density increases of about 100% per year. In 2000, GMR heads started to replace AMR read heads.

Tunneling magnetoresistive (TMR)

In 2005, Seagate introduced the first drives to use tunneling MR (TMR) heads, allowing 400 GB drives with three disk platters. Seagate's TMR heads feature integrated microscopic heater coils that control the shape of the transducer region of the head during operation. The heater can be activated before the start of a write operation to ensure the proximity of the write pole to the disk medium; this improves the written magnetic transitions by ensuring that the head's write field fully saturates the magnetic disk medium. The same thermal-actuation approach can be used to temporarily decrease the separation between the disk medium and the read sensor during readback, thus improving signal strength and resolution. By mid-2006, other manufacturers had begun to use similar approaches in their products.

Perpendicular magnetic recording (PMR)

During the same time frame, a transition to perpendicular magnetic recording (PMR) has been taking place, in which, for reasons of improved stability and higher areal density potential, the traditional in-plane orientation of magnetization in the disk is being changed to a perpendicular orientation. This has major implications for the write process and the write head structure, as well as for the design of the magnetic disk medium (the hard disk platter), and less directly for the read sensor of the magnetic head.