3 minutes taiga server
Launch any server (here I use Debian).
The objective is to run Taiga in Docker containers:
– a database container with the official postgres image
– the Taiga backend
– the Taiga front end
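The three containers above could be wired together with a Compose file along these lines. This is only a sketch: the image names, credentials, and port mapping are illustrative assumptions, not taken from the original post.

```yaml
version: "3"
services:
  taiga-db:
    # Official postgres image, as described above
    image: postgres:12
    environment:
      POSTGRES_USER: taiga        # illustrative credentials
      POSTGRES_PASSWORD: taiga
  taiga-back:
    image: taigaio/taiga-back     # hypothetical image name
    depends_on:
      - taiga-db
  taiga-front:
    image: taigaio/taiga-front    # hypothetical image name
    ports:
      - "80:80"
```

A real deployment would also need the backend configured with the database host and the front end pointed at the backend API.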
So much work and so little time... I won't have time to explain, so this post is just to keep a trace of my install scripts for Cloudera on CentOS 6.
mkdir /opt/quidquid
mkdir /opt/quidquid/PROGS
yum install -y nmap wget apr apr-devel apr-util apr-util-devel libxml pcre pcre-devel gcc openssl-devel
cd ~
wget -c http://apache.crihan.fr/dist/httpd/httpd-2.2.27.tar.gz
tar zxf httpd-2.2.27.tar.gz
cd httpd-2.2.27
./configure --prefix=/opt/apache-2.2.27 --enable-so --enable-ssl --enable-ssl=shared --enable-rewrite --enable-rewrite=shared --with-z=/usr
make
make install
ln -s /opt/apache-2.2.27/ /opt/apache
cd ~
rm -Rf ~/httpd-2.*
ll
vi /opt/apache/conf/httpd.conf
groupadd www
useradd -g www www
cat > /etc/init.d/httpd << "EOF"
. /etc/rc.d/init.d/functions
RETVAL=$?
APACHEHOME="/opt/apache"
case "$1" in
  start)
    echo -n "Starting httpd: "
    daemon $APACHEHOME/bin/httpd
    echo
    touch /var/lock/subsys/httpd
    ;;
  stop)
    echo -n "Shutting down http: "
    killproc httpd
    echo
    rm -f /var/lock/subsys/httpd
    rm -f /var/run/httpd.pid
    ;;
  status)
    status httpd
    ;;
  restart)
    $0 stop
    $0 start
    ;;
  reload)
    echo -n "Reloading httpd: "
    killproc httpd -HUP
    echo
    ;;
  *)
    echo "Usage: $0 {start|stop|restart|reload|status}"
    exit 1
esac
exit 0
EOF
chmod 700 /etc/init.d/httpd
/etc/init.d/httpd start
/etc/init.d/httpd stop
/sbin/chkconfig --level 3 httpd on
/sbin/chkconfig --level 06 httpd off
mkdir -p /home/www/html
mkdir -p /home/www/cgi-bin
mkdir -p /home/www/html/CLOUDSME
cat > /home/www/html/robots.txt << "EOF"
User-agent: *
Disallow: /
EOF
cat > /home/www/html/index.html << "EOF"
Bonjour !
EOF
chgrp -R www /home/www
chmod -R 775 /home/www
cp /opt/apache/conf/httpd.conf /opt/apache/conf/httpd.old
vi /opt/apache/conf/httpd.conf
Edit the following lines to match our needs:
- User www
- Group www
- ServerName 192.168.2.42:80
- Listen 80
- DocumentRoot "/home/www/html"
- <Directory "/home/www/html">
- ScriptAlias /cgi-bin/ "/home/www/cgi-bin/"
- <Directory "/home/www/cgi-bin">
/etc/init.d/httpd start
iptables -P INPUT ACCEPT
iptables -F
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT
iptables -A INPUT -i eth0 -p icmp -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
iptables -L
/sbin/service iptables save
------------------------------
---- CLOUDERA INSTALLATION
http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/4.2.0/CDH4-Quick-Start/cdh4qs_topic_3_3.html
cd ~/
wget -c http://archive.cloudera.com/cdh4/one-click-install/redhat/6/x86_64/cloudera-cdh-4-0.x86_64.rpm
yum --nogpgcheck localinstall cloudera-cdh-4-0.x86_64.rpm
yum install hadoop-conf-pseudo
Install Java:
cd /opt/
wget -c https://blog.quidquid.fr/jdk/jdk-7u51-linux-x64.tar.gz
tar zxf /opt/jdk-7u51-linux-x64.tar.gz
chown -R root:root /opt/jdk1.7.0_51
ln -s /opt/jdk1.7.0_51 /opt/jdk
Add JAVA_HOME to the environment variables:
cat >> ~/.bashrc << "EOF"
# -------------------------
export JAVA_HOME=/opt/jdk
PATH=$PATH:$JAVA_HOME/bin
EOF
cat >> /etc/bashrc << "EOF"
# -------------------------
export JAVA_HOME=/opt/jdk
PATH=$PATH:$JAVA_HOME/bin
EOF
Add VMCLOUDERA at the end of each line:
vi /etc/hosts
sudoedit /etc/sudoers and add:
hdfs ALL=(ALL) ALL
Log in as hdfs:
su - hdfs
hdfs namenode -format
exit
vi /etc/hadoop/conf.pseudo/hadoop-env.sh
export JAVA_HOME=/opt/jdk
Start everything:
for x in `cd /etc/init.d ; ls hadoop-hdfs-*` ; do sudo service $x start ; done
sudo service hadoop-hdfs-namenode start
sudo service hadoop-hdfs-secondarynamenode start
sudo service hadoop-hdfs-datanode start
Optional: start services on boot:
sudo chkconfig hadoop-hdfs-namenode on
sudo chkconfig hadoop-hdfs-secondarynamenode on
sudo chkconfig hadoop-hdfs-datanode on
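The "start everything" loop above uses a backquoted subshell to list the HDFS init scripts and start each one. A quick local demo of the same pattern, using dummy files instead of real init scripts (the `/tmp/init-demo` path is just for illustration):

```shell
# Create dummy files named like the HDFS init scripts
mkdir -p /tmp/init-demo
touch /tmp/init-demo/hadoop-hdfs-namenode /tmp/init-demo/hadoop-hdfs-datanode

# Same loop shape as above: the backquotes expand to the matching file names,
# and the loop body runs once per service (here we only echo instead of starting)
for x in `cd /tmp/init-demo ; ls hadoop-hdfs-*` ; do echo "would start: $x" ; done
```

With real init scripts in `/etc/init.d`, replacing the `echo` with `sudo service $x start` gives exactly the command in the post.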
Step 3: Create the /tmp Directory
Remove the old /tmp if it exists:
sudo -u hdfs hadoop fs -rm -r /tmp
Create a new /tmp directory and set permissions:
sudo -u hdfs hadoop fs -mkdir /tmp
sudo -u hdfs hadoop fs -chmod -R 1777 /tmp
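Mode 1777 used here is world-writable with the sticky bit set, so users can only remove their own files. The same mode can be seen on a local directory (the `/tmp/sticky-demo` path is just for illustration):

```shell
# Create a scratch directory and give it mode 1777, the same mode
# the HDFS /tmp gets above (sticky bit + rwx for everyone)
mkdir -p /tmp/sticky-demo
chmod 1777 /tmp/sticky-demo

# The trailing 't' in the mode string marks the sticky bit
ls -ld /tmp/sticky-demo
stat -c '%a' /tmp/sticky-demo
```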
Step 4: Create Staging and Log Directories
Create the staging directory and set permissions:
sudo -u hdfs hadoop fs -mkdir /tmp/hadoop-yarn/staging
sudo -u hdfs hadoop fs -chmod -R 1777 /tmp/hadoop-yarn/staging
Create the done_intermediate directory under the staging directory and set permissions:
sudo -u hdfs hadoop fs -mkdir /tmp/hadoop-yarn/staging/history/done_intermediate
sudo -u hdfs hadoop fs -chmod -R 1777 /tmp/hadoop-yarn/staging/history/done_intermediate
Change ownership on the staging directory and subdirectory:
sudo -u hdfs hadoop fs -chown -R mapred:mapred /tmp/hadoop-yarn/staging
Create the /var/log/hadoop-yarn directory and set ownership:
sudo -u hdfs hadoop fs -mkdir /var/log/hadoop-yarn
sudo -u hdfs hadoop fs -chown yarn:mapred /var/log/hadoop-yarn
Step 5: Verify the HDFS File Structure:
Run the following command:
$ sudo -u hdfs hadoop fs -ls -R /
You should see the following directory structure:
drwxrwxrwt - hdfs supergroup 0 2014-04-25 11:29 /tmp
drwxr-xr-x - hdfs supergroup 0 2014-04-25 11:29 /tmp/hadoop-yarn
drwxrwxrwt - mapred mapred 0 2014-04-25 11:30 /tmp/hadoop-yarn/staging
drwxr-xr-x - mapred mapred 0 2014-04-25 11:30 /tmp/hadoop-yarn/staging/history
drwxrwxrwt - mapred mapred 0 2014-04-25 11:30 /tmp/hadoop-yarn/staging/history/done_intermediate
drwxr-xr-x - hdfs supergroup 0 2014-04-25 11:33 /var
drwxr-xr-x - hdfs supergroup 0 2014-04-25 10:53 /var/lib
drwxr-xr-x - hdfs supergroup 0 2014-04-25 11:33 /var/log
drwxr-xr-x - yarn mapred 0 2014-04-25 11:33 /var/log/hadoop-yarn
Step 6: Start YARN
sudo service hadoop-yarn-resourcemanager start
sudo service hadoop-yarn-nodemanager start
sudo service hadoop-mapreduce-historyserver start
sudo chkconfig hadoop-yarn-resourcemanager on
sudo chkconfig hadoop-yarn-nodemanager on
sudo chkconfig hadoop-mapreduce-historyserver on
sudo -u hdfs hadoop fs -mkdir /user
sudo -u hdfs hadoop fs -mkdir /user/clouduser
sudo -u hdfs hadoop fs -chown clouduser /user/clouduser
Testing that everything is OK
useradd -g users clouduser
passwd clouduser
!clouduser!CLOUDERA!
su - clouduser
hadoop fs -mkdir input
hadoop fs -put /etc/hadoop/conf/*.xml input
hadoop fs -ls input
Found 3 items
-rw-r--r-- 1 clouduser users 1348 2014-04-25 11:42 input/core-site.xml
-rw-r--r-- 1 clouduser users 1913 2014-04-25 11:42 input/hdfs-site.xml
-rw-r--r-- 1 clouduser users 1001 2014-04-25 11:42 input/mapred-site.xml
Set HADOOP_MAPRED_HOME for user clouduser:
export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
Run an example Hadoop job to grep with a regular expression in your input data.
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar grep input output23 'dfs[a-z.]+'
After the job completes, you can find the output in the HDFS directory named output23 because you specified that output directory to Hadoop.
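The Hadoop grep example counts occurrences of the `dfs[a-z.]+` pattern in the input files. The same regex can be tried locally with GNU grep on a small sample file (the sample content and path below are illustrative, not the real Hadoop config):

```shell
# Stand-in for the Hadoop *.xml config files copied into HDFS above
cat > /tmp/sample-site.xml << 'EOF'
<property><name>dfs.replication</name><value>1</value></property>
<property><name>dfs.permissions.enabled</name><value>false</value></property>
EOF

# Extract every match of the job's regex and count occurrences,
# roughly what the grep example job computes in MapReduce
grep -oE 'dfs[a-z.]+' /tmp/sample-site.xml | sort | uniq -c
```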
$ hadoop fs -ls
Found 2 items
drwxr-xr-x – clouduser users 0 2014-04-25 11:45 /user/clouduser/input
drwxr-xr-x – clouduser users 0 2014-04-25 11:45 /user/clouduser/output23
You can see that there is a new directory called output23.
List the output files.
$ hadoop fs -ls output23
Found 2 items
drwxr-xr-x - clouduser users 0 2014-04-25 11:45 /user/clouduser/output23/_SUCCESS
-rw-r--r-- 1 clouduser users 1068 2014-04-25 11:45 /user/clouduser/output23/part-r-00000
Read the results in the output file.
hadoop fs -cat output23/part-r-00000 | head
1 dfs.safemode.min.datanodes
1 dfs.safemode.extension
1 dfs.replication
1 dfs.permissions.enabled
1 dfs.namenode.name.dir
1 dfs.namenode.checkpoint.dir
1 dfs.datanode.data.dir
iptables -A INPUT -p tcp --dport 631 -j ACCEPT
iptables -A INPUT -p tcp --dport 8031 -j ACCEPT
iptables -A INPUT -p tcp --dport 8042 -j ACCEPT
iptables -A INPUT -p tcp --dport 8080 -j ACCEPT
iptables -A INPUT -p tcp --dport 8088 -j ACCEPT
/sbin/service iptables save
Minimal NetInstall, then:
yum -y update
yum -y install wget openssh-clients nmap
Add our hostname at the end of the hosts file:
vi /etc/hosts
Installation of the JDK.
Installing Hadoop 0.23.10 as a single-node cluster on CentOS 6
Minimal installation of a CentOS 6 NetInstall.
Once installed, update the packages and install a few more:
yum -y update
yum -y install wget openssh openssh-clients nmap
Install a debian server
apt-get update
apt-get install -y subversion
mkdir /srv/SVN
cat > /etc/init.d/svnserve << "EOF"
#!/bin/sh
do_start () {
  svnserve -d -r /srv/SVN --pid-file /var/run/svnserve.pid
}
do_stop () {
  start-stop-daemon --stop --quiet --pidfile /var/run/svnserve.pid
}
case "$1" in
  start)
    do_start
    ;;
  stop)
    do_stop
    exit $?
    ;;
  restart)
    do_stop
    sleep 1s
    do_start
    ;;
  reload|force-reload)
    echo "Error: argument '$1' not supported" >&2
    exit 3
    ;;
  *)
    echo "Usage: $0 start|stop|restart" >&2
    exit 3
    ;;
esac
EOF
chmod +x /etc/init.d/svnserve
update-rc.d svnserve defaults
Create a repo
svnadmin create /srv/SVN/repo
Assign access
vi /srv/SVN/repo/conf/svnserve.conf
anon-access = none
auth-access = write
password-db = passwd
vi /srv/SVN/repo/conf/passwd
[users]
login = password
Now start the server:
/etc/init.d/svnserve start
Done !
To connect, check out:
svn checkout svn://<ip address>/repo
The end
Voldemort is a NoSQL database, a distributed key-value storage system... very roughly, a hashmap (for those coming from the Java world).
URL resources:
– http://project-voldemort.com/
– https://github.com/voldemort/voldemort/downloads
– https://github.com/voldemort/voldemort/wiki
Your base server can be, for instance, a CentOS 6 NetInstall in "Basic server" mode.
In this article we will see how to set up a map server.
Starting from a CentOS 6 x64 NetInstall, "Basic server" installation type with everything at defaults.
For the URL, use this one for example: http://mirror.centos.org/centos/6/os/x86_64
(a VirtualBox image is available here... coming soon...)
The documentation resources behind this post:
– http://wiki.openstreetmap.org/wiki/Osm2pgsql
– http://weait.com/content/build-your-own-openstreetmap-server
– http://www.surfaces.co.il/?p=880
– http://ian01.geog.psu.edu/geoserver_docs/data/openstreetmap/osm.html
We start by updating the system:
A small post in order to keep this somewhere.
In order to create an automatic install through kickstart, the steps are:
– Create a kickstart file (manually, or via this tool: http://www.centos.org/docs/5/html/Installation_Guide-en-US/ch-redhat-config-kickstart.html). Here is mine (I changed it to hide a few things):
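For reference, a minimal CentOS kickstart file has roughly this shape. This is a sketch with illustrative values, not the author's actual file (which is not reproduced here):

```
# Minimal kickstart sketch (illustrative values)
install
url --url=http://mirror.centos.org/centos/6/os/x86_64
lang en_US.UTF-8
keyboard us
network --bootproto=dhcp
rootpw changeme
timezone Europe/Paris
bootloader --location=mbr
clearpart --all --initlabel
autopart
reboot

%packages
@core
wget
%end
```

At boot, the installer is pointed at the file with a kernel argument such as `ks=http://<server>/ks.cfg`.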