Wednesday, February 8, 2012

How to Compile on the Cluster [Contribution]


GNU compilers
The GNU compilers for C, C++ and Fortran 90 are installed on the cluster. To compile a program, you can use one of the following commands depending on the programming language used:

C
gcc programa.c -o ejecutable

C++
g++ programa.cpp -o ejecutable

Fortran 90
gfortran programa.f90 -o ejecutable


Intel compilers
C
icc programa.c -o ejecutable
C++
icpc programa.cpp -o ejecutable
Fortran 77
ifort programa.f -o ejecutable
Fortran 90
ifort programa.f90 -o ejecutable

OpenMPI
C
mpicc programa.c -o nombreEjecutable
C++
mpic++ programa.cpp -o nombreEjecutable
Fortran 90
mpif90 programa.f90 -o nombreEjecutable
 
Intel compilers with OpenMPI
By default, the OpenMPI wrapper compilers use the GNU compilers. To use the Intel compilers instead, you must run the following commands for the different cases. This must be done in each console you open on the cluster. If you want to always use the Intel compilers, add these lines to the .bashrc file located in your home directory.

C
export OMPI_MPICC=icc

C++
export OMPI_MPICXX=icpc
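A sketch of the corresponding .bashrc fragment is below. It uses the variable names from the commands above; note that the Open MPI documentation also describes the variables OMPI_CC, OMPI_CXX and OMPI_FC for the same purpose, and that icpc (Intel's C++ driver) is assumed here for the C++ case:

```shell
# Lines for ~/.bashrc so the OpenMPI wrappers always use the Intel compilers
export OMPI_MPICC=icc
export OMPI_MPICXX=icpc

# Verify which underlying compiler a wrapper actually invokes
mpicc --showme
```

`mpicc --showme` prints the full command line the wrapper would run, so you can confirm the switch took effect.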

http://m.web.ua.es/es/cluster-iuii/bibliotecas/librerias-paralelas.html 

Contribution for this week:
http://elisa.dyndns-web.com/progra/Compile
It shows how to compile several libraries, in different languages as well.

Tuesday, February 7, 2012

Message Passing Interface (MPI) [Contribution]



Message Passing Interface (MPI) is a standardized and portable message-passing system designed by a group of researchers from academia and industry to function on a wide variety of parallel computers. The standard defines the syntax and semantics of a core of library routines useful to a wide range of users writing portable message-passing programs in Fortran 77 or the C programming language. Several well-tested and efficient implementations of MPI exist, including some that are free and in the public domain. These fostered the development of a parallel software industry and encouraged the development of portable and scalable large-scale parallel applications.

To start your first MPI program, you must include the library:
#include <mpi.h> /* for C/C++ */


Install MPI

sudo aptitude install mpich-bin libmpich1.0-dev ssh

sudo /etc/init.d/ssh start


Configure SSH

$ ssh-keygen -t dsa
$ cd ~/.ssh
$ cat id_dsa.pub >> authorized_keys


Let's write a hello world as an example:
Hello World C
/* C Example */
#include <stdio.h>
#include <mpi.h>

int main (int argc, char** argv)
{
  int rank, size;

  MPI_Init (&argc, &argv);   /* starts MPI */
  MPI_Comm_rank (MPI_COMM_WORLD, &rank);        /* get current process id */
  MPI_Comm_size (MPI_COMM_WORLD, &size);        /* get number of processes */
  printf( "Hello world from process %d of %d\n", rank, size );
  MPI_Finalize();
  return 0;
}
mpicc hello.c -o hellompi
mpiexec -n 5 hellompi

where 5 is the number of processes to run.
Hello World C++
/* C++ Example */
#include <mpi.h>
#include <iostream>
#include <stdio.h>

using namespace std;

int main (int argc, char *argv[])
{
  int rank, size;

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  printf("Hello world from process %d\n", rank);
  MPI_Finalize();
  return 0;
}
mpic++ -o Hola HolaMundo.cpp
mpirun -np 4 Hola 
The output should look something like this:
user@user ~$ mpirun -np 4 ./hellompi 
Hello world from process 0 of 4
Hello world from process 2 of 4
Hello world from process 1 of 4
Hello world from process 3 of 4 

http://www.open-mpi.org/
http://www.mancera.org/2010/12/08/montar-un-cluster-en-linux-ubuntu/
http://www.cs.ucsb.edu/~hnielsen/cs140/openmpi-install.html


Contribution: http://elisa.dyndns-web.com/progra/MPI

My contribution on MPI is running a sample MPI program; in my next contribution I will explain it in detail.

Monday, February 6, 2012

High-performance computing (HPC)

High-performance computing (HPC) is the use of parallel processing for running advanced application programs efficiently, reliably and quickly. The term applies especially to systems that function above a teraflop, or 10^12 floating-point operations per second. The term HPC is occasionally used as a synonym for supercomputing, although technically a supercomputer is a system that performs at or near the currently highest operational rate for computers. Some supercomputers work at more than a petaflop, or 10^15 floating-point operations per second.

The most common users of HPC systems are scientific researchers, engineers and academic institutions. Some government agencies, particularly the military, also rely on HPC for complex applications. High-performance systems often use custom-made components in addition to so-called commodity components. As demand for processing power and speed grows, HPC will likely interest businesses of all sizes, particularly for transaction processing and data warehouses. The occasional techno-fiend might use an HPC system to satisfy an exceptional desire for advanced technology.

The term High Performance Computing (HPC) was originally used to describe powerful, number crunching supercomputers. As the range of applications for HPC has grown, however, the definition has evolved to include systems with any combination of accelerated computing capacity, superior data throughput, and the ability to aggregate substantial distributed computing power. 

The architectures of HPC systems have also evolved over time, as illustrated in Figure 1. Ten years ago, symmetric multiprocessing (SMP) and massively parallel processing (MPP) systems were the most common architectures for high-performance computing. More recently, however, the popularity of these architectures has decreased with the emergence of a more cost-effective approach: cluster computing. According to the Top500 Supercomputer Sites project, the cluster architecture is now the most commonly used by the world's highest performing computer systems.


 Figure 1.  The history of HPC architectures shows a shift toward cluster computing.
Source: Top500.org

Bibliography
http://en.wikipedia.org/wiki/High-performance_computing
http://www.xtremedatainc.com/pdf/FPGA_Acceleration_in_HPC.pdf
http://hpc.fs.uni-lj.si/sites/default/files/HPC_for_dummies.pdf