Thursday, November 9, 2017

Docker ELK Stack and the geo ip plugin

https://docs.docker.com/compose/gettingstarted/#step-3-define-services-in-a-compose-file

http://elk-docker.readthedocs.io/#running-with-docker-compose

# https://elk-docker.readthedocs.io/#installation
sudo docker pull sebp/elk
docker images

# https://elk-docker.readthedocs.io/#usage
sudo docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -it --name elk sebp/elk

# or set up a yml file
# create an entry for the ELK Docker image by adding the following lines to
# your docker-compose.yml file:
 
elk:
  image: sebp/elk
  ports:
    - "5601:5601"
    - "9200:9200"
    - "5044:5044"
# You can then start the ELK container like this:
$ sudo docker-compose up elk
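
# Note: the fragment above is Compose v1 syntax (no top-level version or
# services keys). On a newer docker-compose the same service would be
# declared under services; a minimal sketch of the equivalent file:

```yaml
# hypothetical docker-compose.yml using version-2 syntax
version: '2'
services:
  elk:
    image: sebp/elk
    ports:
      - "5601:5601"
      - "9200:9200"
      - "5044:5044"
```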


# Follow the instructions to inject a log message into Logstash

# Inject the message

# In a browser, view the injected message
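
The injection step from the elk-docker docs can be sketched as follows. This assumes the container is named elk as above; the logstash path inside the image comes from the image's documentation and may change between versions:

```
sudo docker exec -it elk /opt/logstash/bin/logstash --path.data /tmp/logstash/data \
  -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'
# once logstash starts, type a line such as: this is a dummy entry
# then press CTRL-D and search for it at the URLs below
```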

http://192.168.1.155:9200/_search?pretty

http://192.168.1.155:5601/app/kibana#/management/kibana/index?_g=()

# use the container id from the docker ps and stop the container

docker stop fce12628893c

# Let's now build an elk-docker image from a git clone

cd

git clone https://github.com/spujadas/elk-docker

http://elk-docker.readthedocs.io/#building-image

https://stackoverflow.com/questions/36617904/extending-local-dockerfile

# build the cloned docker image


~/elk-docker$ docker build -t elk-docker .

# Now create the second Dockerfile, which will add the GeoIP plugin

A Dockerfile like the following extends the base image and installs the GeoIP processor plugin (which adds information about the geographical location of IP addresses):
FROM sebp/elk

ENV ES_HOME /opt/elasticsearch
WORKDIR ${ES_HOME}

RUN CONF_DIR=/etc/elasticsearch gosu elasticsearch bin/elasticsearch-plugin \
    install ingest-geoip
You can now build the new image (see the Building the image section above) and run the container in the same way as you did with the base image.
~$ mkdir elk-docker-geoip
~$ cd !$
cd elk-docker-geoip
~/elk-docker-geoip$ vi Dockerfile

FROM sebp/elk

ENV ES_HOME /opt/elasticsearch
WORKDIR ${ES_HOME}

RUN CONF_DIR=/etc/elasticsearch gosu elasticsearch bin/elasticsearch-plugin \
    install ingest-geoip

~/elk-docker-geoip$ docker build -t elk-docker .

Tuesday, November 7, 2017

Spark Scala Cassandra intro

# Assumes that you have previously installed Oracle Java 8
# Use the java install instructions at this page if you have not already installed Java 8



# Install Cassandra


echo "deb http://www.apache.org/dist/cassandra/debian 311x main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list
curl https://www.apache.org/dist/cassandra/KEYS | sudo apt-key add -
sudo apt-get update
sudo apt-get install cassandra

https://www.datastax.com/dev/blog/kindling-an-introduction-to-spark-with-cassandra-part-1

# build the 2.11 Spark compatible spark-cassandra-connector library
$ sbt/sbt -Dscala-2.11=true assembly

# Assumes that you git cloned the Spark Cassandra connector code into $HOME, did an sbt build, and have spark-shell on your PATH.

# Start cassandra

# To start the Apache Cassandra service on your server, you can use the following command:

sudo systemctl start cassandra.service

# To stop the service, you can use the command below:

sudo systemctl stop cassandra.service

# If the service is not already enabled on system boot, you can enable it by using the command below:

sudo systemctl enable cassandra.service
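
# Before opening cqlsh it is worth confirming the node actually came up;
# nodetool ships with Cassandra and should show the local node as UN
# (Up/Normal) once the service has finished starting. A quick check:

```
sudo systemctl status cassandra.service
nodetool status   # look for 'UN' next to 127.0.0.1
```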
# Add a keyspace and table for the tutorial in the Cassandra shell: cqlsh


$ cqlsh
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.11.1 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.

cqlsh> CREATE KEYSPACE test_spark WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 3 };

cqlsh> CREATE TABLE test_spark.test (value int PRIMARY KEY);

cqlsh:test_spark> INSERT INTO test_spark.test (value) VALUES (1);

# In another shell start spark-shell

$ cd

$ spark-shell

scala> sc.parallelize( 1 to 50 ).sum()


res1: Double = 1275.0

scala> CTRL-D # to exit

# restart with the Cassandra connector jar

$ spark-shell --jars ~/spark-cassandra-connector/spark-cassandra-connector/target/full/scala-2.11/spark-cassandra-connector-assembly-2.0.5-70-g2ee41fc.jar


scala> import com.datastax.spark.connector._, org.apache.spark.SparkContext, org.apache.spark.SparkContext._, org.apache.spark.SparkConf

scala> val conf = new SparkConf(true).set("spark.cassandra.connection.host","localhost")

scala> val sc = new SparkContext(conf)

scala> val test_spark_rdd = sc.cassandraTable("test_spark", "test")

test_spark_rdd: com.datastax.spark.connector.rdd.CassandraTableScanRDD[com.datastax.spark.connector.CassandraRow] = CassandraTableScanRDD[4] at RDD at CassandraRDD.scala:16
scala> val data = sc.cassandraTable("my_keyspace", "my_table")
data: com.datastax.spark.connector.rdd.CassandraTableScanRDD[com.datastax.spark.connector.CassandraRow] = CassandraTableScanRDD[5] at RDD at CassandraRDD.scala:16
#########
# Start the movie tutorial

# Make the key space and table for movies

cqlsh:test_spark> USE test_spark; 

cqlsh:test_spark> CREATE KEYSPACE spark_demo WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 3 };
cqlsh:test_spark> USE spark_demo;

cqlsh:spark_demo> CREATE TABLE spark_demo.movies (id int PRIMARY KEY, title text, genres text); 

cqlsh:spark_demo> describe table movies;


cqlsh:spark_demo> INSERT INTO spark_demo.movies (id, title, genres) VALUES (1, 'Bladerunner', 'Scifi');




cqlsh:spark_demo> INSERT INTO spark_demo.movies (id, title, genres) VALUES (2, 'The Big Short', 'Finance');


cqlsh:spark_demo> SELECT * FROM spark_demo.movies  ;

 id | genres  | title
----+---------+---------------
  1 |   Scifi |   Bladerunner
  2 | Finance | The Big Short

(2 rows)

# Spark code for movies
import com.datastax.spark.connector._, org.apache.spark.SparkContext, org.apache.spark.SparkContext._, org.apache.spark.SparkConf

val conf = new SparkConf(true).set("spark.cassandra.connection.host","localhost")


val data = sc.cassandraTable("spark_demo", "movies")
case class Movie(Id: Int, Title: String, Genres: String)

val data = sc.cassandraTable[Movie]("spark_demo", "movies")

data.foreach(println)

# output
Movie(1,Bladerunner,Scifi)
Movie(2,The Big Short,Finance)



Sunday, November 5, 2017

Scala schema code generation

SCHEMA CODE GENERATION

The Slick code generator is a convenient tool for working with an existing or evolving database schema. It can be run stand-alone or integrated into your sbt build to create all the code Slick needs to work.


http://slick.lightbend.com/doc/3.0.0/code-generation.html

Wednesday, October 18, 2017

Tensorflow GPU expects Nvidia Driver 387

Remove the current Nvidia driver and install Nvidia driver version 387 



$ python -c "import tensorflow as tf; sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))"
Traceback (most recent call last):
  File "/home/depappas/python_virtual_env/deep_learning_3.5/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow.py", line 41, in
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "/home/depappas/python_virtual_env/deep_learning_3.5/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in
    _pywrap_tensorflow_internal = swig_import_helper()
  File "/home/depappas/python_virtual_env/deep_learning_3.5/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "/home/depappas/python_virtual_env/deep_learning_3.5/lib/python3.5/imp.py", line 242, in load_module
    return load_dynamic(name, filename, file)
  File "/home/depappas/python_virtual_env/deep_learning_3.5/lib/python3.5/imp.py", line 342, in load_dynamic
    return _load(spec)
ImportError: libnvidia-fatbinaryloader.so.387.12: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "", line 1, in
  File "/home/depappas/python_virtual_env/deep_learning_3.5/lib/python3.5/site-packages/tensorflow/__init__.py", line 24, in
    from tensorflow.python import *
  File "/home/depappas/python_virtual_env/deep_learning_3.5/lib/python3.5/site-packages/tensorflow/python/__init__.py", line 49, in
    from tensorflow.python import pywrap_tensorflow
  File "/home/depappas/python_virtual_env/deep_learning_3.5/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow.py", line 52, in
    raise ImportError(msg)
ImportError: Traceback (most recent call last):
  File "/home/depappas/python_virtual_env/deep_learning_3.5/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow.py", line 41, in
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "/home/depappas/python_virtual_env/deep_learning_3.5/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in
    _pywrap_tensorflow_internal = swig_import_helper()
  File "/home/depappas/python_virtual_env/deep_learning_3.5/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "/home/depappas/python_virtual_env/deep_learning_3.5/lib/python3.5/imp.py", line 242, in load_module
    return load_dynamic(name, filename, file)
  File "/home/depappas/python_virtual_env/deep_learning_3.5/lib/python3.5/imp.py", line 342, in load_dynamic
    return _load(spec)
ImportError: libnvidia-fatbinaryloader.so.387.12: cannot open shared object file: No such file or directory


Failed to load the native TensorFlow runtime.

See https://www.tensorflow.org/install/install_sources#common_installation_problems

for some common reasons and solutions.  Include the entire stack trace

above this error message when asking for help.

Tensorflow GPU expects cuDNN v6.0

The fix for the error below is to install cuDNN v6.0.

 python -c "import tensorflow as tf; sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))"
Traceback (most recent call last):
  File "/home/depappas/python_virtual_env/deep_learning_3.5/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow.py", line 41, in
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "/home/depappas/python_virtual_env/deep_learning_3.5/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in
    _pywrap_tensorflow_internal = swig_import_helper()
  File "/home/depappas/python_virtual_env/deep_learning_3.5/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "/home/depappas/python_virtual_env/deep_learning_3.5/lib/python3.5/imp.py", line 242, in load_module
    return load_dynamic(name, filename, file)
  File "/home/depappas/python_virtual_env/deep_learning_3.5/lib/python3.5/imp.py", line 342, in load_dynamic
    return _load(spec)
ImportError: libcudnn.so.6: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "", line 1, in
  File "/home/depappas/python_virtual_env/deep_learning_3.5/lib/python3.5/site-packages/tensorflow/__init__.py", line 24, in
    from tensorflow.python import *
  File "/home/depappas/python_virtual_env/deep_learning_3.5/lib/python3.5/site-packages/tensorflow/python/__init__.py", line 49, in
    from tensorflow.python import pywrap_tensorflow
  File "/home/depappas/python_virtual_env/deep_learning_3.5/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow.py", line 52, in
    raise ImportError(msg)
ImportError: Traceback (most recent call last):
  File "/home/depappas/python_virtual_env/deep_learning_3.5/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow.py", line 41, in
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "/home/depappas/python_virtual_env/deep_learning_3.5/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in
    _pywrap_tensorflow_internal = swig_import_helper()
  File "/home/depappas/python_virtual_env/deep_learning_3.5/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "/home/depappas/python_virtual_env/deep_learning_3.5/lib/python3.5/imp.py", line 242, in load_module
    return load_dynamic(name, filename, file)
  File "/home/depappas/python_virtual_env/deep_learning_3.5/lib/python3.5/imp.py", line 342, in load_dynamic
    return _load(spec)
ImportError: libcudnn.so.6: cannot open shared object file: No such file or directory


Failed to load the native TensorFlow runtime.

See https://www.tensorflow.org/install/install_sources#common_installation_problems

for some common reasons and solutions.  Include the entire stack trace

above this error message when asking for help.

Ubuntu 16.04 Nvidia GPU Driver, CUDA, and CUDNN setup for Tensorflow

Installation instructions for the Nvidia driver, CUDA, and cuDNN,
compatible with TensorFlow running on a GPU.

Uses:
Ubuntu 16.04 (Nvidia CUDA 8.0 supports 14.04 and 16.04)
Python 3.5
TensorFlow 1.0 (requires CUDA 8.0 and cuDNN 5.1)
CUDA 8.0
cuDNN 5.1
Keras 2.0
Nvidia driver 387

TensorFlow has Nvidia driver and library dependencies.


I used the following commands for CUDA installation. Let's make sure that the driver version and the CUDA version match. You cannot see the CUDA version in the deb package name, but you can see it in the run file name. If you want to use the deb file, first select the run file and note its version so that you know which driver version to install.

ssh to the machine with the GPU and execute the following commands.


################################################################################
# From https://devtalk.nvidia.com/default/topic/876432/no-cuda-capable-device-is-detected/
# Clean the Nvidia drivers

################################################################################
sudo apt-get purge 'nvidia*'


################################################################################
# Install the Nvidia driver
# Version 387 in this post
################################################################################

sudo add-apt-repository ppa:graphics-drivers

# If you want to use the system settings to do the driver installation
sudo apt-get install xserver-xorg-video-nouveau 

sudo reboot

# In System Settings, open Software & Updates > Additional Drivers,
# then select the driver version that you want to use (387 in this post)



sudo reboot

# check the driver versions that are installed.
sudo dpkg --list | grep nvidia

# check that the driver is working (we will check it again after installing cuda to make sure that there is not a driver/cuda conflict)

nvidia-smi

Wed Oct 18 19:07:04 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.90                 Driver Version: 384.90                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 106...  Off  | 00000000:01:00.0 Off |                  N/A |
|  0%   32C    P8     6W / 200W |     64MiB /  6072MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1139      G   /usr/lib/xorg/Xorg                            61MiB |
+-----------------------------------------------------------------------------+

################################################################################

# Cuda installation
# install version 384
# Get the deb package link from Nvidia
# Browse to https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_version=1604&target_type=debnetwork
# Select the version you want that matches the Nvidia driver version that you installed
################################################################################

wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_9.0.176-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1604_9.0.176-1_amd64.deb
sudo apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub
sudo apt-get update
sudo apt-get -y install cuda-8-0

# check that the driver is still working 

nvidia-smi

Wed Oct 18 19:07:04 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.90                 Driver Version: 384.90                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 106...  Off  | 00000000:01:00.0 Off |                  N/A |
|  0%   32C    P8     6W / 200W |     64MiB /  6072MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1139      G   /usr/lib/xorg/Xorg                            61MiB |
+-----------------------------------------------------------------------------+

nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Tue_Jan_10_13:22:03_CST_2017
Cuda compilation tools, release 8.0, V8.0.61


################################################################################

# CUDNN Installation
# Download and install cuDNN v5.1 in a shell


################################################################################

cd /usr/local 
export CUDNN_VERSION=5.1
export CUDNN_TAR_FILE="cudnn-8.0-linux-x64-v${CUDNN_VERSION}.tgz"
sudo wget http://developer.download.nvidia.com/compute/redist/cudnn/v${CUDNN_VERSION}/${CUDNN_TAR_FILE}
sudo tar -xzvf ${CUDNN_TAR_FILE}
#sudo cp -fr cuda/* /usr/local/cudnn # modified gist
sudo chmod a+r /usr/local/cuda/lib64/*
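
# TensorFlow has to find the CUDA and cuDNN shared libraries at run time;
# loader errors like the ones above usually mean LD_LIBRARY_PATH does not
# include the CUDA lib directory. A minimal sketch for ~/.bashrc — the paths
# are the Ubuntu defaults and may differ on your machine:

```shell
# assumed default install location; adjust if cuda lives elsewhere
export CUDA_HOME=/usr/local/cuda
export PATH="$CUDA_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$CUDA_HOME/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
```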

# Install TensorFlow using the instructions at the link below

http://programmingmatrix.blogspot.com/2017/04/clean-ubuntu-1604-tensorflow-10-cuda-80.html

SSH: keyless login on Ubuntu

https://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/



# Here is what every post seems to omit...

# You need to specify the RSA key to use - this is the private half of the key pair whose public half you copied to the server
ssh -i ~/.ssh/derek_rsa depappas@192.168.1.155
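
# For completeness, generating the key pair in the first place looks like
# this. Run in a scratch directory here so nothing in ~/.ssh is touched;
# the file name derek_rsa just matches the command above:

```shell
# generate a pass-phrase-less RSA key pair in a scratch directory
# (in practice you would write it to ~/.ssh and then run ssh-copy-id)
KEYDIR="$(mktemp -d)"
ssh-keygen -t rsa -b 4096 -f "$KEYDIR/derek_rsa" -N "" -q
ls "$KEYDIR"
# copy the public half to the server (network step, shown for reference):
# ssh-copy-id -i "$KEYDIR/derek_rsa.pub" depappas@192.168.1.155
```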

# make an alias

alias sshberlin='ssh -i ~/.ssh/joes_rsa joesmith@192.168.1.132'
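
# An alternative to the alias is an entry in ~/.ssh/config, which also makes
# scp and rsync pick up the key (host name and paths taken from the alias above):

```
Host berlin
    HostName 192.168.1.132
    User joesmith
    IdentityFile ~/.ssh/joes_rsa
```

# after which plain 'ssh berlin' works without the -i flag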

New Deep Learning Ubuntu Setup

My custom setup for a new Ubuntu machine. I keep the .bashrc and .emacs files on a USB drive and copy them over. I gave up on Dropbox on Ubuntu/Linux and will need to try some other solution.

sudo apt-get -y install keepassx
mv .bashrc .bashrc.bak
ln -s ~/Desktop/deep_learning_pc_setup/.bashrc .
ln -s ~/Desktop/linux_home_dir_setup/.bashrc.include .
ln -s .bashrc .b
ln -s .bashrc.include .b.i
ln -s ~/Desktop/linux_home_dir_setup/.bashrc.alias .
ln -s .bashrc.alias .b.a
source ~/.b
sudo apt-get -y install emacs



Then follow these instructions.
http://programmingmatrix.blogspot.com/2017/04/clean-tensorflow-10-installation.html

Tuesday, October 17, 2017

Ubuntu: add a user to the sudoers file

# The instructions at the link below do not work:

https://www.digitalocean.com/community/tutorials/how-to-create-a-sudo-user-on-ubuntu-quickstart

# This works:

sudo visudo

joesmith ALL = (ALL) NOPASSWD: ALL

Install Google Chrome on Ubuntu

http://blog.kasunbg.org/2011/03/java-applets-in-google-chrome-under.html

I found that Google Chrome doesn't auto-detect Java applets in Linux. The following commands will link the Java applet plugin to Chrome. Enter these in a terminal. Make sure you put the correct path for the JRE in /usr/lib/jvm/<jdk1.6.0_22>/jre/lib/i386/libnpjp2.so.


sudo mkdir /opt/google/chrome/plugins
cd /opt/google/chrome/plugins
Now link the libnpjp2 plugin into the /opt/google/chrome/plugins directory by running the following. Double-check that the JRE path is correct.
sudo ln -s /usr/lib/jvm/jdk1.6.0_22/jre/lib/i386/libnpjp2.so

Friday, October 13, 2017

Configuring Grub 2 on CentOS 7 to Dual Boot with Windows 7 reference

How to copy a CentOS ISO to USB on Mac OS X

https://wpguru.co.uk/2015/02/how-to-copy-a-centos-iso-to-usb-on-mac-os-x/



cd ~/Downloads


diskutil list

/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *1.0 TB     disk0
   1:                        EFI EFI                     209.7 MB   disk0s1
   2:                  Apple_HFS Macintosh HD            999.3 GB   disk0s2
   3:                 Apple_Boot Recovery HD             650.0 MB   disk0s3
/dev/disk1
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *512.1 GB   disk1
   1:                        EFI EFI                     209.7 MB   disk1s1
   2:                  Apple_HFS Macintosh SSD           511.3 GB   disk1s2
   3:                 Apple_Boot Recovery HD             650.0 MB   disk1s3
/dev/disk2
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *1.0 TB     disk2
   1:                  Apple_HFS VM Drive               1.0 TB     disk2s1
/dev/disk3
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *1.5 TB     disk3
   1:                        EFI EFI                     209.7 MB   disk3s1
   2:                  Apple_HFS Black Time Machine      1.5 TB     disk3s2
/dev/disk4
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *4.0 GB     disk4
   1:                 DOS_FAT_32 C64

diskutil unmountDisk /dev/disk4

Unmount of all volumes on disk4 was successful
sudo dd if=[your ISO image] of=/dev/disk4

// time passes...

694272+0 records in
694272+0 records out
355467264 bytes transferred in 249.100402 secs (1427004 bytes/sec)
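
// dd against /dev/disk4 goes through the buffered device and is slow; on
// macOS the raw device /dev/rdisk4 with a larger block size is usually much
// faster, and ejecting before unplugging avoids a corrupted write:

```
sudo dd if=[your ISO image] of=/dev/rdisk4 bs=1m
diskutil eject /dev/disk4
```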