
Wednesday, February 22, 2012

Contributions

For this week, my partner Juan Carlos and I researched Beowulf clusters and MPI systems, and we made two blog entries where we share the information we gathered.
We also ran the John the Ripper application: MD5 password hashes were attacked with brute-force and dictionary attacks.

For the next week we are beginning to research parallel CUDA, so Juan Carlos and I are working on the construction of a GPU cluster, with some examples and maybe a live execution.

Monday, February 20, 2012

Activity Report [2]

This week I ran some tests on the cluster: a few programs were executed to check how the cluster responded, to see its performance when running several processes, and to measure how long it took to perform calculations and finish processes.

Some MD5-encrypted passwords were cracked by brute force and dictionary attacks using John the Ripper MPI, a parallelizable cryptography program.
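A rough sketch of the kind of commands used (the hash and wordlist file names are placeholders, and the exact format flag depends on the John the Ripper build):

# dictionary attack on raw MD5 hashes
mpirun -np 4 ./john --format=raw-md5 --wordlist=password.lst hashes.txt
# brute force (incremental mode)
mpirun -np 4 ./john --format=raw-md5 --incremental hashes.txt
# show the cracked passwords
./john --show --format=raw-md5 hashes.txt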

I heard some classmates saying that a cluster could not be built from an Ubuntu live USB, so I started testing, and it is true: by default the packages cannot be installed. I looked for a solution, and it turns out you only have to modify the software sources and then the necessary packages can be downloaded. Another problem is that the hostname has to be changed every time the system starts, because on reboot it goes back to the default machine name and the node hostnames in /etc/hosts are erased; next week I will look for a solution to this.
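A sketch of the manual steps described above (the package names, node names and addresses are illustrative examples, not necessarily the ones used):

# in /etc/apt/sources.list, enable the commented-out "universe" lines, then:
sudo apt-get update
sudo apt-get install openmpi-bin libopenmpi-dev
# after each reboot, restore the node's hostname and the other nodes' entries
sudo hostname nodo1
echo "192.168.1.101 nodo2" | sudo tee -a /etc/hosts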

I attach screenshots of the results and of the cluster.

Results of some programs (screenshots):

Prime numbers with 6 processes

Hello world

Calculation of Pi

Calculation of Pi with 30 processes

Cracking MD5
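The Pi calculations above are typically done with the classic MPI numerical-integration example; a minimal sketch of that kind of program (an illustration, not necessarily the exact code that was run):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size, i, n = 1000000;       /* number of intervals */
    double h, x, sum = 0.0, local, pi;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    h = 1.0 / (double)n;
    /* each process integrates a strided subset of the intervals */
    for (i = rank; i < n; i += size) {
        x = h * ((double)i + 0.5);
        sum += 4.0 / (1.0 + x * x);
    }
    local = h * sum;
    /* combine the partial results on process 0 */
    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("pi is approximately %.16f\n", pi);
    MPI_Finalize();
    return 0;
}

Compiled with mpicc pi.c -o pi and run with, for example, mpirun -np 30 ./pi.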

To modify the live USB, it has to be set up as shown in the image.



Wednesday, February 8, 2012

How to Compile on the Cluster [Contribution]


GNU compilers
The GNU compilers for C, C++ and Fortran 90 are installed on the cluster. To compile a program you can use the following commands, depending on the programming language used:

C
gcc programa.c -o ejecutable

C++
g++ programa.cpp -o ejecutable

Fortran90
gfortran programa.f90 -o ejecutable
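As a quick check, a minimal programa.c that the commands above would compile (just an illustration):

#include <stdio.h>

int main(void) {
    printf("Hola mundo\n");
    return 0;
}

Running ./ejecutable afterwards prints the message.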


Intel compilers

C
icc programa.c -o ejecutable

C++
icpc programa.cpp -o ejecutable

Fortran77
ifort programa.f -o ejecutable

Fortran90
ifort programa.f90 -o ejecutable

OpenMPI

C
mpicc programa.c -o nombreEjecutable

C++
mpic++ programa.cpp -o nombreEjecutable

Fortran90
mpif90 programa.f90 -o nombreEjecutable
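The resulting executable is launched with mpirun. For example, a minimal MPI "Hola mundo" (hola.c, shown as an illustration) compiled and run with 4 processes:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);               /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* id of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */
    printf("Hola mundo from process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}

mpicc hola.c -o hola
mpirun -np 4 ./hola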
 
Intel OpenMPI
By default the GNU compilers are used together with MPI. To use the Intel compilers you must run the following commands for the different cases. This must be done in each console you open on the cluster. If you want to always use the Intel compilers, add these lines to the .bashrc file located in your home directory:
C
export OMPI_MPICC=icc

C++
export OMPI_MPICXX=icpc

http://m.web.ua.es/es/cluster-iuii/bibliotecas/librerias-paralelas.html 

Contribution for this week:
http://elisa.dyndns-web.com/progra/Compile
It shows how programs that use various libraries are compiled, in different languages as well.

Thursday, February 2, 2012

Contribution

An important point to bear in mind is the concept of a Beowulf cluster: its structure and definition.

I leave the supporting link for the project.


http://elisa.dyndns-web.com/progra/Beowulf
 
 
 
I also consider the contribution of our partner Juan Carlos a good one for our project.

Wednesday, February 1, 2012

Structuring a Beowulf cluster

Beowulf is a technology for clustering computers based on the Linux operating system to form a virtual parallel supercomputer. In 1994, under the sponsorship of the ESS project of the Center of Excellence in Space Data and Information Sciences (CESDIS), Thomas Sterling and Don Becker created the first Beowulf cluster for research purposes.

The following describes the hardware and software components that make up a Beowulf cluster.

Beowulf has a multicomputer architecture that can be used for parallel computing. The system consists of a master node and one or more slave nodes connected via an Ethernet network or some other network topology. It is built with common hardware components on the market, like any PC capable of running Linux, plus standard Ethernet adapters and switches. Since it contains no special components, it is completely reproducible.

One of the main differences between Beowulf and a cluster of workstations (COW, cluster of workstations) is the fact that Beowulf behaves more like a single machine than like many workstations. In most cases the slave nodes do not have monitors or keyboards and are accessed only via remote login or a serial terminal. The master node controls the whole cluster and serves file systems to the slave nodes. It is also the cluster's console and its connection to the outside world. Large Beowulf machines may have more than one master node, and possibly other nodes devoted to specific tasks such as consoles or monitoring stations. In most cases the slave nodes of a Beowulf are simple stations: they are configured and controlled by the master node, and do only what it asks of them. In a configuration with diskless slaves, the slave nodes do not even know their own IP address until the master tells them what it is.


Software



Beowulf can use any Linux distribution as its operating system. It also uses message-passing libraries such as PVM (Parallel Virtual Machine) and MPI (Message Passing Interface). In the beginning Beowulf used the Slackware distribution; now most clusters have migrated to Red Hat because of its easy system administration. Without a doubt, the cluster presents an important alternative for many particular problems, not only because of its economy but also because it can be designed and adjusted for specific applications.

One of the alternatives for managing the resources of a Beowulf cluster is MOSIX, a tool developed for UNIX-like systems whose outstanding characteristic is its use of sharing algorithms, designed to respond instantly to changes in the available resources and so make load balancing effective. Using MOSIX, a cluster of PCs works in such a way that the nodes function as parts of a single computer. The main objective of this tool is to distribute the load generated by sequential or parallelized applications.

Load balancing is normally performed by the users, who allocate the different processes of a parallel application to each node after manually reviewing the state of those nodes. The packages generally used for this kind of work are PVM and MPI. This type of software has tools for the initial allocation of processes to each node, regardless of the existing load on them or the free memory available on each. These packages run at the user level as ordinary applications, i.e., they are unable to use other resources or to distribute the workload in the cluster dynamically. Most of the time the user is responsible for managing the resources of the nodes and for manually executing the distribution or migration of programs. Unlike these packages, MOSIX locates available resources automatically and globally, and migrates processes or programs dynamically and "on line" to ensure that each node is used to the maximum. An example of the manual MPI approach is sketched below.
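For example, with Open MPI the user typically lists the nodes by hand in a host file and decides how many processes go to each one (the node names and slot counts here are hypothetical):

# hostfile: two processes on each node
nodo1 slots=2
nodo2 slots=2

mpirun -np 4 --hostfile hostfile ./programa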

Beowulf Linux Cluster



Starting
Beowulf clusters are scalable performance clusters based on commodity hardware, on a private system network, with an open source (Linux) software infrastructure.

Each cluster consists of PCs or workstations dedicated to running high-performance computing tasks. The nodes in the cluster don't sit on people's desks; they are dedicated to running cluster jobs. The cluster is usually connected to the outside world through only a single node.

What are Beowulf systems being used for?

Traditional technical applications such as simulations, biotechnology, and petro-clusters; financial market modeling, data mining and stream processing; and Internet servers for audio and games.

One question that is commonly enough asked on the beowulf list is "How hard is it to build or care for a beowulf?"
Mind you, it is quite possible to go into beowulfery with no more than a limited understanding of networking, a handful of machines (or better, a pocketful of money) and a willingness to learn, and over the years I've watched and sometimes helped as many groups and individuals (including myself) in many places went from a state of near-total ignorance to a fair degree of expertise on little more than guts and effort.
However, this sort of school is the school of hard (and expensive!) knocks; one ought to be able to do better and not make the same mistakes and reinvent the same wheels over and over again, and this book is an effort to smooth the way so that you can.
One place that this question is often asked is in the context of trying to figure out the human costs of beowulf construction or maintenance, especially if your first cluster will be a big one and has to be right the first time. After all, building a cluster of more than 16 or so nodes is an increasingly serious proposition. It may well be that beowulfs are ten times cheaper than a piece of "big iron" of equivalent power (per unit of aggregate compute power by some measure), but what if it costs ten times as much in human labor to build or run? What if it uses more power or cooling? What if it needs more expensive physical infrastructure of any sort?
These are all very valid concerns, especially in a shop with limited human resources or with little linux expertise or limited space, cooling, or power. Building a cluster with four nodes, eight nodes, perhaps even sixteen nodes can often be done so cheaply that it seems "free", because the opportunity cost of the resources required is so minimal and the benefits so much greater than the costs. Building a cluster of 256 nodes without thinking hard about cost issues, infrastructure, and cost-benefit analysis is very likely to have a very sad outcome, the least of which is that the person responsible will likely lose their job.
If that person (who will be responsible) is you, then by all means read on. I cannot guarantee that the following sections will keep you out of the unemployment line, but I'll do my best.


 

Monday, January 16, 2012

What is a Cluster?

Cluster

A computer cluster is a group of computers working toward a common goal. These computers combine hardware, network communication and software to work together as if they were a single system. There are many attractive reasons for building these groups, but the main one is being able to process information more efficiently and quickly than a single system. Generally, a cluster works on a local area network (LAN), which allows efficient communication, although the machines must be physically close. A broader version of the concept is the grid, where the goal is the same but the clusters of computers are connected by wide area networks (WAN). Some authors consider the grid a cluster of clusters in a "global" sense. While technology and costs increasingly allow these approaches, the effort and complexity of using tens or hundreds (sometimes thousands) of machines is very large. Nevertheless, the advantage in computation time means that these kinds of solutions for high-performance computing (HPC) are considered very attractive and are constantly evolving.


Simply put, a cluster is a group of multiple computers connected by an ordinary desktop network. Clusters are usually employed to improve performance and/or availability over that of a single computer, while being more cost-effective than single computers of comparable speed or availability.

A cluster is expected to present combinations of the following services:

● High Performance: a high-performance cluster is a set of computers designed to give high performance in terms of computing power.

● High Availability: a set of two or more machines characterized by having a series of shared services and by constantly monitoring each other.

● Load Balancing: a load-balancing (or adaptive-computing) cluster is composed of one or more computers (called nodes) that act as the cluster's front end and take care of distributing the service requests received by the cluster among the other computers, which form its back end.

● Scalability: the desirable property of a system, network or process that indicates its ability either to handle continued growth of work smoothly, or to be enlarged without losing quality in the services offered.

The construction of the computers of a cluster is easy and cheap because of its flexibility: the nodes can all have the same hardware configuration and operating system (a homogeneous cluster), or they can have different performance, architectures and operating systems (a heterogeneous cluster).

For a cluster to work as such, it is necessary to provide a cluster management system, which is responsible for interacting with the user and the processes running on the cluster in order to optimize their operation.

Classification of Clusters

The term cluster has different connotations for different groups of people. The types of clusters, established according to the use given to them and the services they offer, determine the meaning of the term for the group that uses it. Clusters can be classified according to their characteristics: there can be high-performance clusters (HPC - High Performance Clusters), high-availability clusters (HA - High Availability) and high-efficiency clusters (HT - High Throughput).

High Performance (HPC): clusters that execute tasks requiring great computational capacity, large amounts of memory, or both. Carrying out these tasks may tie up the cluster's resources for long periods of time.

High Availability (HA): clusters whose design goal is to provide availability and reliability. These clusters try to provide the maximum availability of the services they offer. Reliability is provided by software that detects failures and allows recovery from them, while the hardware avoids having a single point of failure.

High Efficiency (HT): clusters whose design goal is to run as many tasks as possible in the shortest possible time. There is data independence between the individual tasks, and the delay between cluster nodes is not considered a big problem.