
The state of Computer Architectures in practice

  • Writer: Mark Skilton
  • Oct 30, 2007
  • 6 min read

The state of Computer Architecture and why it's changing

In a recent WebEx lecture on computer architectures from January 2007, David Patterson, a professor of Computer Science at UC Berkeley, covered the current state of Moore's law, its continuation through the development of parallelism (multi-core processors), and made a special observation on transactional memory.

This is already upon us: Sun has announced support for transactional memory processing with the first generation of its Rock processors, due out in the second half of next year, ahead of IBM and AMD, who are expected to follow soon. The hardware vendors have “bet the farm” on this, announcing parallelism as a key strategy and challenging ISVs and developers to start using the new technology.

  • Everything is changing in hardware and software

  • Old conventional wisdom is out

  • Governance – there is a desperate need to create a “watering hole” to get people to understand what is changing and how this will affect the use of hardware and software.

  • The traditional programming model (uni-processor, serial processing) is changing through the introduction of parallelism (we need languages that can use multi-core processing).

  • Hardware and software changes are driving a change through to users and developers, who have to “deal with it”.

Observations on IT trends

  • Old versus new conventional wisdom

  • New benchmarks for new architectures

  • Hardware building blocks changing

  • Human-centric programming is under pressure to manage the complexity of parallelism

  • Innovating at the HW/SW interface is clearly evident, with hardware- and software-centric APIs and devices

  • Deconstructing operating systems, most notably in virtualisation and SaaS development. This supports the Google and Nokia business models but puts pressure on vendors like Intel and Microsoft, who currently have millions of lines of code and support a business model that is code-centric rather than thin-client-centric.

  • Building innovative computing, as seen in the quantum research area but, more near term, in parallelism and transactional computing.

  • Where is this going?

Conventional Wisdom (Old/Traditional IT)

There are a number of barriers being hit with the current technology

  • “Power wall” preventing scaling of transistors. Transistors are getting so small (< 65 nm) that they are hitting errors and physical operating problems. This is compounded by the increasing costs of the 65 nm mask, ECAD and clock design, reaching a point that makes returns on scaling uni-processor speeds and miniaturisation difficult to sustain. For example, a 12 GHz processor is difficult to build using today's uni-processor technology.

  • “Memory wall”, where loads and stores are slow compared with processor speed

  • “ILP wall” from the increasing difficulty of finding enough parallelism for instruction-level processing, which tends to run into prediction difficulties. TLP (thread-level parallelism) is proving better for multi-core processing of applications and is the logic behind the creation of multi-core CPUs.

Multi-core CPUs will continue this trend of parallelism by treating the processor as “the new transistor”.

The impact of this is the need to write programs that use parallelism to take advantage of these technology changes.
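To make the point concrete, here is a minimal sketch of what thread-level parallelism looks like to a developer. This is my own illustrative example, not Patterson's, and assumes only the standard java.util.concurrent library: it splits a simple summation into one chunk per core.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class ParallelSum {
    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        long[] data = new long[10_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        // Split the array into one chunk per core and sum each chunk in its own thread.
        int chunk = data.length / cores;
        List<Future<Long>> parts = new ArrayList<>();
        for (int c = 0; c < cores; c++) {
            final int from = c * chunk;
            final int to = (c == cores - 1) ? data.length : from + chunk;
            parts.add(pool.submit(() -> {
                long sum = 0;
                for (int i = from; i < to; i++) sum += data[i];
                return sum;
            }));
        }

        long total = 0;
        for (Future<Long> part : parts) total += part.get(); // combine partial results
        pool.shutdown();
        System.out.println("total = " + total);
    }
}
```

The arithmetic is trivial; the structure is the point. The work must be explicitly partitioned and the partial results explicitly combined, which is exactly the burden the lecture says is shifting onto developers.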

What's different? Why are they doing it?

Because it is the end of the sequential-processor era (uni-processor architecture) from Intel, IBM, Sun, AMD and others. We are moving from multi-programming to multi-threading.

It's not the code itself but the things you are trying to do with the technology stack (applications, hardware, programming model and system software).

Need to identify patterns of computation and communications.

Identify well defined targets from algorithms, software and architecture standpoint to achieve this.

Patterns are abstract, so it won't be the code that you are aiming at; it is the goals and capabilities.

Defining criteria for principles and patterns

Principles and patterns are used to define architecture such that the architecture, code and actions have a better chance of being agile, flexible and better positioned for the future.

Define an architecture to do this to help drive hardware and software value.

Define things that are realistic and viable. The aim is often to simplify and manage complexity. This paradigm comes up all the time in computer design and solution management, with the trade-off between performance and capability, for example orchestration versus async rules processing.

A symptom to look for where principles and patterns are NOT being used in a business or project/program is the balance of effort and resources allocated to planning and verification versus design. Often the planning and verification teams are bigger than the design team, suggesting something is out of kilter (not working).

Hardware Architecture choices

  • How to connect

  • Patterns of choice

  • Two types of networks: bandwidth-oriented networks for data, and latency-oriented networks

  • Virtualisation

  • In-memory synchronisation (cache); transactional memory with full/empty states

  • Communication patterns – may be variable or stable

There is no simple best topology for all.

Programming models and OS

Many types of programming language have been developed in the traditional world of uni-processor programming, and these now need to embrace the parallel world.

  • Hardware-centric

  • Application-centric

  • Formalism-centric

Most programming languages are designed for use by human beings, which promotes a human-centric programming model. This inherently has faults, as it does not take into account the psychology of the human brain and its behaviour.

  • Success of a programming language is often affected by the human beings who use it

  • The human brain is not a good processor for decisions and storage. It is biased and short-termist. It has no error checking and makes false assumptions.

  • Human-centric programming has to deal with efficiency, productivity and correctness.

Transactional memory will help with human-centric programming; in a sense, transactional memory is there to reduce human error rates. This was backed up by a human-subject research study into the psychological impact on programming language success rates, which found that shared memory worked better than message passing. So patterns and uses of parallel processing that enable decoupling of dependencies improve the way we can control errors.
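Java has no standard transactional memory construct, so as a hedged illustration of the idea (my own sketch, not an example from the lecture), the optimistic compare-and-set loop in java.util.concurrent.atomic mimics what a transaction does: compute a new value privately, commit only if nothing else has changed in the meantime, otherwise retry.

```java
import java.util.concurrent.atomic.AtomicLong;

public class OptimisticCounter {
    private final AtomicLong balance = new AtomicLong(0);

    // Analogue of a small "atomic { ... }" block: read, compute, then commit only if
    // no other thread changed the value in between; otherwise retry.
    public void deposit(long amount) {
        while (true) {
            long current = balance.get();          // read the shared value
            long updated = current + amount;       // compute the new value privately
            if (balance.compareAndSet(current, updated)) {
                return;                            // commit succeeded
            }
            // Another thread won the race; loop and try again (no lock is ever held).
        }
    }

    public long read() {
        return balance.get();
    }
}
```

The appeal for human-centric programming is that the developer never reasons about lock ordering or deadlock; a failed attempt simply retries.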

Need to integrate human psychology with programming models / environments.

Operating Systems (OS)

The traditional approach has compilers with millions of lines of code that need to optimise programs for execution. A 21st-century technique is “auto-tuning”, where the program searches for its best-performing variant at run time for the specific problem size, without needing compiler intervention.
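A toy version of the auto-tuning idea is sketched below. It is illustrative only (real auto-tuners search far larger spaces of implementations); it simply times a few candidate variants of the same computation and keeps the fastest.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

public class TinyAutoTuner {
    // Time each candidate a few times and return the name of the fastest one.
    static String pickFastest(Map<String, Supplier<Long>> candidates) {
        String best = null;
        long bestNanos = Long.MAX_VALUE;
        for (Map.Entry<String, Supplier<Long>> e : candidates.entrySet()) {
            long elapsed = Long.MAX_VALUE;
            for (int run = 0; run < 3; run++) {          // a few runs to smooth out noise
                long start = System.nanoTime();
                e.getValue().get();                      // execute the candidate
                elapsed = Math.min(elapsed, System.nanoTime() - start);
            }
            if (elapsed < bestNanos) {
                bestNanos = elapsed;
                best = e.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        Map<String, Supplier<Long>> candidates = new LinkedHashMap<>();
        candidates.put("simple-loop", () -> {
            long s = 0;
            for (long v : data) s += v;
            return s;
        });
        candidates.put("unrolled-by-4", () -> {
            long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
            int i = 0;
            for (; i + 3 < data.length; i += 4) {
                s0 += data[i]; s1 += data[i + 1]; s2 += data[i + 2]; s3 += data[i + 3];
            }
            for (; i < data.length; i++) s0 += data[i];
            return s0 + s1 + s2 + s3;
        });

        System.out.println("Fastest variant: " + pickFastest(candidates));
    }
}
```

Choosing the variant by measurement rather than by static compiler heuristics is the essence of the technique.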

OS models are now defined independently of the number of processors.

Deconstructing the OS is now part of the norm, with:

  • Virtual caches

  • Thin VMs

This is driving the use of thin VM environments.

How to measure success in a Computer Architecture

  • Easy to write and execute

  • Maximising programmer productivity

  • Maximising application performance and energy efficiency

Challenges

  • Conventional serial performance issues

  • Minimising remote access

  • Balancing loads

  • Granularity of data movement and synchronisation (see the sketch below)
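As a hedged sketch of the load-balancing and granularity challenge (again my own example, not from the lecture), compare static partitioning with a shared queue that threads pull from: when work items have very uneven cost, letting each thread take its next item dynamically keeps all cores busy.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicLong;

public class DynamicLoadBalance {
    public static void main(String[] args) throws InterruptedException {
        // Work items of very uneven size: a static split would leave some threads idle.
        ConcurrentLinkedQueue<Integer> work = new ConcurrentLinkedQueue<>();
        for (int i = 1; i <= 1000; i++) work.add(i % 7 == 0 ? 2_000_000 : 10_000);

        AtomicLong total = new AtomicLong();
        int cores = Runtime.getRuntime().availableProcessors();
        Thread[] workers = new Thread[cores];
        for (int t = 0; t < cores; t++) {
            workers[t] = new Thread(() -> {
                Integer size;
                while ((size = work.poll()) != null) {     // each thread pulls its next item
                    long s = 0;
                    for (int i = 0; i < size; i++) s += i; // stand-in for variable-cost work
                    total.addAndGet(s);
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        System.out.println("result = " + total.get());
    }
}
```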

Problems with business case change

  • People not ready

  • Takes forever to initiate

  • Software people don’t work without hardware

Transactional Memory Computer

Higher software and system performance and enabling parallel software → better apps

It has failed in the past, so why will it work this time?

  • There’s no alternative

  • Vendors converted to it

  • It is growing

  • Performance improving

  • Lower latency and higher bandwidths

  • Interoperability between vendors with standards

  • More standardisation of data standards

  • Open source movement growing

  • Framework to use to help set this up

Summary

Technology stack

  • Applications

  • Language

  • Compilers

  • Libraries

  • Networks

  • Architectures

  • Hardware

  • CAD

We need to look at the problem from a vertical integration perspective, but we often tend to do horizontal innovation and solutions.

A large number of innovators “package” things together in IT, resulting in large-grained packages rather than many small packages.

Infinite bandwidth, no latency with innovative packages

We fail if we make things difficult to program, difficult to use.

How to get granularity down to get parallelism

Contracting for performance, risk-reward sharing, resourcing

Takeaways for SOA

  • Parallelism is a new trend that has similarities with SOA adoption.

  • Governance in architecture is key for traditional and new approaches

  • Principles and patterns are key

  • Architecture frameworks are important

  • Use of services supports abstraction and the principle of decoupling, which in turn should reduce human-related errors (see the sketch after this list)

  • SOA should define how to measure success as a way to drive the right behaviours in design and use

  • Need to look at the solution space as a holistic approach, not just a horizontal technology layer e.g. virtualisation, portals etc

  • Many vendors (and architects) try to package things up (increase granularity) but at the expense of simplicity, e.g. orchestration versus simple exchange

  • Making things easier to use and program is central to reuse
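As a small, hedged illustration of the decoupling point above (an in-process sketch, not a full SOA stack; the queue stands in for a service's simple exchange), the producer and consumer below share nothing except the channel between them.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class DecoupledExchange {
    public static void main(String[] args) throws InterruptedException {
        // The queue is the only thing the two sides share; neither knows about the other.
        BlockingQueue<String> requests = new ArrayBlockingQueue<>(16);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    requests.put("order-" + i);      // blocks if the consumer falls behind
                }
                requests.put("DONE");                // simple end-of-stream marker
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                String msg;
                while (!(msg = requests.take()).equals("DONE")) {
                    System.out.println("processing " + msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}
```

Neither side needs to know how the other is implemented, which is the property that should reduce human-related integration errors.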

Source: a recent WebEx lecture on computer architectures and the impact of multi-core by David Patterson, a professor of Computer Science, UC Berkeley.

 
 
 
