Tuesday, January 16, 2018

Handling Permissions with Docker Volumes - reference

https://denibertovic.com/posts/handling-permissions-with-docker-volumes/

The expression between the backticks gets interpolated:

$ id -u $USER
1000
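Backticks and the more modern `$(...)` form are interchangeable for command substitution; a minimal sketch:

```shell
# Both forms substitute the command's stdout into the surrounding command line.
uid_legacy=`id -u`       # legacy backtick form
uid_modern=$(id -u)      # POSIX $() form, which nests cleanly

# Either value can feed docker's -u flag:
echo "run as ${uid_modern}:$(id -g)"
```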





Thanks for the very useful article.
If one wants to bake in the new user at build time, then --build-arg can be leveraged.
Assuming the current user (who is building the Docker image) is 'bob' with UID=103,
the goal is to create a Docker image having user 'jdoe' with the same UID=103.
================================================================
 $ cat Dockerfile
# OR whatever base image you prefer
FROM centos:7
ARG USER_ID
RUN useradd --shell /bin/bash -o --create-home --user-group -u $USER_ID  jdoe
================================================================
Build
------
$ docker build -t my-base-image --build-arg USER_ID=`id -u $USER` .
================================================================
Run
-----
$ docker run -it -u jdoe my-base-image
[jdoe@a1b2c3f4 /]$ id -u jdoe
103
================================================================





Hi Deni,
I think this can (now) be solved without any additional scripts. Just mount your /etc/group and /etc/passwd read-only into your container, like:
docker run -ti \
-v /etc/group:/etc/group:ro -v /etc/passwd:/etc/passwd:ro \
-u $( id -u $USER ):$( id -g $USER ) \
some-image:latest bash
Notice also the use of "id -g" and "id -u", which also solves the group issue. This method has only one drawback: if any script or application tries to write to /etc/group or /etc/passwd, it will fail due to permissions. But at least for my use cases I never ran into issues here.
Best regards and thanks for your ideas on that.
Basti.
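The write-access drawback can be handled up front by having the entrypoint probe writability before attempting any user-database edits; a minimal sketch (the chmod'd temp file is a stand-in for the read-only /etc/passwd inside the container):

```shell
# Probe whether a file is writable before trying to modify it.
check_writable() {
  target="$1"
  if [ -w "$target" ]; then
    echo "writable: $target"
  else
    echo "read-only: $target (skipping user-database edits)"
  fi
}

# Simulate the read-only bind mount with a chmod'd temp file;
# for a non-root user this reports read-only.
f=$(mktemp)
chmod 444 "$f"
check_writable "$f"
rm -f "$f"
```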

Sunday, January 14, 2018

Ubuntu 16.04 stuck in login loop

https://askubuntu.com/questions/223501/ubuntu-gets-stuck-in-a-login-loop

My Ubuntu is stuck in a login loop when trying to enter my desktop. When I login, the screen gets black and soon after that the login screen comes back.



I encountered this exact problem and none of the suggested fixes above worked for me. After almost giving up, I looked at .xsession-errors and noticed I had a typo in my .profile (an extra } left over from editing the file earlier in the day).
That was causing the login loop. It might be another place to look if the other suggested fixes don't work for you.


https://askubuntu.com/questions/801440/login-loop-badvalue-integer-parameter-out-of-range-for-operation-16-04

Add +iglx to the xserver-command line in /usr/share/lightdm/lightdm.conf.d/50-xserver-command.conf:
[SeatDefaults] 
# Dump core 
xserver-command=X -core +iglx
After which you either reboot, or press Ctrl-Alt-F1, log in, and run sudo service lightdm restart.
more .xsession-errors
openConnection: connect: No such file or directory
cannot connect to brltty at :0
...


$ grep NVRM /var/log/kern.log
Jan 14 08:37:31 berlin kernel: [1351327.884905] NVRM: API mismatch: the client has the version 384.111, but
Jan 14 08:37:31 berlin kernel: [1351327.884905] NVRM: this kernel module has the version 384.98.  Please
Jan 14 08:37:31 berlin kernel: [1351327.884905] NVRM: make sure that this kernel module and all NVIDIA driver
Jan 14 08:37:31 berlin kernel: [1351327.884905] NVRM: components have the same version.
Jan 14 09:17:26 berlin kernel: [1353722.487659] NVRM: API mismatch: the client has the version 384.111, but
Jan 14 09:17:26 berlin kernel: [1353722.487659] NVRM: this kernel module has the version 384.98.  Please
Jan 14 09:17:26 berlin kernel: [1353722.487659] NVRM: make sure that this kernel module and all NVIDIA driver
Jan 14 09:17:26 berlin kernel: [1353722.487659] NVRM: components have the same version.
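The two version numbers can be pulled out of the log mechanically; a sketch that extracts and compares them (the sample log is inlined on stdin here instead of reading /var/log/kern.log):

```shell
# Extract the client and kernel-module driver versions from NVRM log
# lines and flag a mismatch.
check_nvrm() {
  log=$(cat)
  client=$(printf '%s\n' "$log" | sed -n 's/.*client has the version \([0-9]*\.[0-9]*\).*/\1/p' | head -n1)
  module=$(printf '%s\n' "$log" | sed -n 's/.*kernel module has the version \([0-9]*\.[0-9]*\).*/\1/p' | head -n1)
  if [ "$client" = "$module" ]; then
    echo "versions match: $client"
  else
    echo "mismatch: client=$client module=$module"
  fi
}

# Against the real log you would pipe in: grep NVRM /var/log/kern.log
check_nvrm <<'EOF'
NVRM: API mismatch: the client has the version 384.111, but
NVRM: this kernel module has the version 384.98.  Please
EOF
```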

CHANGE THE DRIVER VERSION TO MATCH THE VERSION YOU INSTALLED 

I was able to resolve my issue by performing the following steps:

1. First, reinstall the nvidia-375 graphics driver from the repository.
sudo apt-get install nvidia-375
During the installation, I noticed that the nvidia-375.26 driver was being installed.

2. I moved the entire /lib/modules/4.4.0-64-generic/updates/dkms folder to a backup folder in my home directory. This was to make sure all old .ko files were removed.
sudo mv /lib/modules/4.4.0-64-generic/updates/dkms ~/backup

3. I regenerated the .ko files for the installed driver using 
sudo dpkg-reconfigure nvidia-375
It created a new dkms folder with the relevant nvidia kernel modules. I also checked their versions using:
sudo modinfo nvidia_375.ko
sudo modinfo nvidia_375_drm.ko
sudo modinfo nvidia_375_modeset.ko
sudo modinfo nvidia_375_uvm.ko
Their output showed that they were for version 375.26.

4. I rebooted the system with 
sudo reboot

Adapted from https://devtalk.nvidia.com/default/topic/525877/linux/api-mismatch-means-ubuntu-can-39-t-boot-i-can-39-t-fix-i-please-help-/1

$ sudo dpkg-reconfigure nvidia-384
Removing all DKMS Modules
Done.
update-initramfs: deferring update (trigger activated)

A modprobe blacklist file has been created at /etc/modprobe.d to prevent Nouveau from loading. This can be reverted by deleting /etc/modprobe.d/nvidia-graphics-drivers.conf.
A new initrd image has also been created. To revert, please replace /boot/initrd-4.13.0-26-generic with /boot/initrd-$(uname -r)-backup.

*****************************************************************************
*** Reboot your computer and verify that the NVIDIA graphics driver can   ***
*** be loaded.                                                            ***
*****************************************************************************

INFO:Enable nvidia-384
DEBUG:Parsing /usr/share/ubuntu-drivers-common/quirks/put_your_quirks_here
DEBUG:Parsing /usr/share/ubuntu-drivers-common/quirks/lenovo_thinkpad
DEBUG:Parsing /usr/share/ubuntu-drivers-common/quirks/dell_latitude
Loading new nvidia-384-384.111 DKMS files...
Building only for 4.13.0-26-generic
Building for architecture x86_64
Building initial module for 4.13.0-26-generic
Done.

nvidia_384:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.13.0-26-generic/updates/dkms/

nvidia_384_modeset.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.13.0-26-generic/updates/dkms/

nvidia_384_drm.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.13.0-26-generic/updates/dkms/

nvidia_384_uvm.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.13.0-26-generic/updates/dkms/

depmod....

DKMS: install completed.
Processing triggers for initramfs-tools (0.122ubuntu8.10) ...
update-initramfs: Generating /boot/initrd.img-4.13.0-26-generic

$ sudo modinfo nvidia_384
filename:       /lib/modules/4.13.0-26-generic/updates/dkms/nvidia_384.ko
alias:          char-major-195-*
version:        384.111
supported:      external
license:        NVIDIA
srcversion:     EB07FB20BD3656BF1198872
alias:          pci:v000010DEd00000E00sv*sd*bc04sc80i00*
alias:          pci:v000010DEd*sv*sd*bc03sc02i00*
alias:          pci:v000010DEd*sv*sd*bc03sc00i00*
depends:        
name:           nvidia
vermagic:       4.13.0-26-generic SMP mod_unload 
parm:           NVreg_Mobile:int
parm:           NVreg_ResmanDebugLevel:int
parm:           NVreg_RmLogonRC:int
parm:           NVreg_ModifyDeviceFiles:int
parm:           NVreg_DeviceFileUID:int
parm:           NVreg_DeviceFileGID:int
parm:           NVreg_DeviceFileMode:int
parm:           NVreg_UpdateMemoryTypes:int
parm:           NVreg_InitializeSystemMemoryAllocations:int
parm:           NVreg_UsePageAttributeTable:int
parm:           NVreg_MapRegistersEarly:int
parm:           NVreg_RegisterForACPIEvents:int
parm:           NVreg_CheckPCIConfigSpace:int
parm:           NVreg_EnablePCIeGen3:int
parm:           NVreg_EnableMSI:int
parm:           NVreg_TCEBypassMode:int
parm:           NVreg_UseThreadedInterrupts:int
parm:           NVreg_EnableStreamMemOPs:int
parm:           NVreg_MemoryPoolSize:int
parm:           NVreg_RegistryDwords:charp
parm:           NVreg_RegistryDwordsPerDevice:charp
parm:           NVreg_RmMsg:charp
parm:           NVreg_AssignGpus:charp

Friday, January 12, 2018

Error (jedi): Failed to start Jedi EPC server.

Error (jedi): Failed to start Jedi EPC server.
*** You may need to run "M-x jedi:install-server". ***
This could solve the problem, especially if you haven't run the command since installing or updating Jedi.el, and if the server complains about Python module imports.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://github.com/tkf/emacs-jedi/issues/140

Please explain why you don't understand jedi:install-server and why you think creating the directory solves the problem. It is VERY important to have a clear document so that we don't need to deal with questions.
Let me add more explanation.
Running jedi:install-server installs the Python modules required by Jedi.el. You just need to type M-x jedi:install-server RET to install or update the Python modules. To make this command work, you need an Emacs package called python-environment.el and a Python command line program called virtualenv. We have jedi:install-server to solve dependencies outside of the Emacs packaging system. It cannot be integrated with, for example, the package-install command, since the Emacs packaging system does not allow us to install Python packages at install time. The command jedi:install-server makes a Python virtual environment in ~/.emacs.d/.python-environments/default/ and installs all Python modules required by Jedi.el into it. This way, you don't need sudo or su to install anything, and it is easy to install compatible Python packages.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 https://realpython.com/blog/python/emacs-the-best-python-editor/#installation

LightNet: Bringing pjreddie's DarkNet out of the shadows - reference

https://github.com/explosion/lightnet


sudo apt-get install libopenblas-dev 

conda create --name lightnet python=3 
 
pip install opencv-python
 
source activate lightnet

pip install pathlib

pip install numpy

pip install plac

pip install requests

pip install msgpack-python

pip install lightnet

python -m lightnet download tiny-yolo

python -m lightnet download yolo







Wednesday, January 10, 2018

Docker: write in shared volumes - reference

https://stackoverflow.com/questions/29245216/write-in-shared-volumes-docker

https://stackoverflow.com/questions/23544282/what-is-the-best-way-to-manage-permissions-for-docker-shared-volumes?rq=1

Dockerfile
FROM debian:jessie
# add our user and group first to make sure their IDs get assigned consistently, regardless of other deps added later
RUN groupadd -r app \
  && useradd -r -g app app
RUN mkdir -p /data/app \
  && chown -R app:app /data/app
VOLUME /data/app
USER app
CMD ["echo", "Data container for app"]


$ docker run -it volume_test bash
app@8839f8e5369e:/$ ls -ld /data
drwxr-xr-x 3 root root 4096 Jan 10 23:56 /data
app@8839f8e5369e:/$ ls -ld /data/app/
drwxr-xr-x 2 app app 4096 Jan 10 23:56 /data/app/
app@8839f8e5369e:/$ 

https://medium.com/@ramangupta/why-docker-data-containers-are-good-589b3c6c749e

https://docs.docker.com/engine/admin/volumes/volumes/#start-a-service-with-volumes

https://container42.com/2014/11/18/data-only-container-madness/

https://www.tecmint.com/install-run-and-delete-applications-inside-docker-containers/

https://container42.com/2013/12/16/persistent-volumes-with-docker-container-as-volume-pattern/ 

Tuesday, January 9, 2018

Actors vs Futures - reference

https://www.chrisstucchio.com/blog/2013/actors_vs_futures.html

https://www.reddit.com/r/scala/comments/5kk9bc/my_problems_with_akka/#bottom-comments

Before I respond to your post, I'd like to say the following:
Don't use Spray. Use Akka-HTTP. Spray is effectively obsolete. You want Akka-HTTP 10.x, which is just as fast and has Akka Streams built in. Streams uses Actors under the hood, but gives you an abstraction that lets you focus on the elements.
If you want to use Akka testing, you need akka-testkit, which is fairly low level. For streams, you want akka-stream-testkit, which is easier. http://doc.akka.io/docs/akka/2.4.9/scala/stream/stream-testkit.html
The example you're using is quite old as well -- for an example of an akka-http project using Guice and slick, see:
https://github.com/kgoralski/akka-http-slick-guice
Now to your points:
You want to use Actors when you want to hold state, and when you need to handle failure -- that is, an exception from several child actors that manage database operations can propagate up to a parent actor which acts as a supervisor, determines more complex failures, and defines a failure hierarchy. This is more difficult to manage with Futures, which do have recover / Try etc. but don't have the same supervision capabilities. Read Jamie Allen's book for more details.
For a REST service with a DB persistence layer, you should be using Slick with HikariCP, and size the thread pool according to http://slick.lightbend.com/doc/3.1.0/database.html#database-thread-pool -- you don't want to just throw 10 threads at a DB and hope for the best; you have to size it according to performance tests and work out the queueSize.
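Concretely, a Slick database block sized per those docs looks roughly like this in application.conf (the database name, URL, and the numbers are placeholders; the real numThreads and queueSize come out of your performance tests):

```hocon
mydb {
  connectionPool = "HikariCP"
  url = "jdbc:postgresql://localhost/mydb"
  driver = "org.postgresql.Driver"
  numThreads = 10    // pool size measured under load, not guessed
  queueSize = 1000   // bound the request queue so overload fails fast
}
```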
If you want HTTP inside a dependency injection framework using Akka with a testing framework and Guice DI and no complicated bits unless you want them, then you want Play. Play is all of those things put together.
Here are the example projects:
And there are more projects here: https://playframework.com/download
We're aiming to integrate Akka-HTTP as the default HTTP engine for Play 2.6.x, and there's an experimental version of the engine in 2.5.x, but for now the core engine is Netty, with Akka Streams used internally to communicate with Netty.

Sharing Data Between the Host and the Docker Container - reference

Nvidia-Docker and tensorflow Docker

 https://www.tensorflow.org/install/install_linux#InstallingDocker

https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/docker

We recommend installing one of the latest versions. For example, the following command launches the latest TensorFlow GPU binary image in a Docker container from which you can run TensorFlow programs in a shell:
$ nvidia-docker run -it gcr.io/tensorflow/tensorflow:latest-gpu bash
 
The following command also launches the latest TensorFlow GPU binary image in a Docker container. In this Docker container, you can run TensorFlow programs in a Jupyter notebook:
$ nvidia-docker run -it -p 8888:8888 gcr.io/tensorflow/tensorflow:latest-gpu

The following command installs an older TensorFlow version (0.12.1):
$ nvidia-docker run -it -p 8888:8888 gcr.io/tensorflow/tensorflow:0.12.1-gpu

Docker will download the TensorFlow binary image the first time you launch it. For more details see the TensorFlow docker readme.

nvidia-docker-mod-python_2.7 for squeezedet

# Docker file below

$ docker build -t nvidia-docker-mod-tensorflow_2_7  --build-arg USER_ID=`id -u $USER` .
$ nvidia-docker run -it -u app -v ~/squeezeDet:/squeezeDet nvidia-docker-mod-tensorflow_2_7  bash
app@63bce0603d4c:/notebooks$ cd /squeezeDet/
app@63bce0603d4c:/squeezeDet$ touch t
app@63bce0603d4c:/squeezeDet$ ls t
t
app@63bce0603d4c:/squeezeDet$ ls 
LICENSE  README  README.md  data  requirements.txt  scripts  src  t
app@63bce0603d4c:/squeezeDet$ python src/demo.py
2018-01-17 05:30:43.253638: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2018-01-17 05:30:43.253659: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2018-01-17 05:30:43.253664: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2018-01-17 05:30:43.253668: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2018-01-17 05:30:43.253672: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2018-01-17 05:30:43.380710: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:893] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-01-17 05:30:43.380934: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties:
name: GeForce GTX 1060 6GB
major: 6 minor: 1 memoryClockRate (GHz) 1.7845
pciBusID 0000:01:00.0
Total memory: 5.93GiB
Free memory: 5.55GiB
2018-01-17 05:30:43.380948: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0
2018-01-17 05:30:43.380953: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0:   Y
2018-01-17 05:30:43.380959: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0)
Image detection output saved to ./data/out/out_sample.png
app@63bce0603d4c:/squeezeDet$
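The quick check above (touch t inside the bind mount) can be made explicit by comparing numeric owners on either side of the mount; a sketch using stat from GNU coreutils (the temp file stands in for a file created inside the container):

```shell
# A file created by the mapped container user should be owned by the
# matching host UID. Compare stat's numeric owner with id -u.
f=$(mktemp)
owner=$(stat -c %u "$f")
me=$(id -u)
if [ "$owner" = "$me" ]; then
  echo "ownership matches: uid $owner"
else
  echo "ownership mismatch: file uid $owner vs current uid $me"
fi
rm -f "$f"
```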



~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dockerfile


# docker build -t nvidia-docker-mod-tensorflow_2_7 --build-arg USER_ID=`id -u $USER` .
#
# nvidia-docker run -it -u app -v ~/squeezeDet:/squeezeDet nvidia-docker-mod-tensorflow_2_7 bash


FROM gcr.io/tensorflow/tensorflow:latest-gpu

# By default, Docker containers run as the root user. This is bad because:
#   1) You're more likely to mess up settings that you shouldn't
#   2) If an attacker gets access to your container, it's bad if they're root
# Here's how you can change a Docker container to run as a non-root user

# ***
# Do any custom logic needed prior to adding your code here
# ***

# install python
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt-get update
RUN apt-get -yqq install python3.5
RUN apt-get -yqq install python3-pip
RUN apt-get -yqq install pandoc
RUN apt-get -yqq install graphviz
RUN apt-get -yqq install python-opencv

# Install app dependencies
RUN pip install --upgrade pip

# Misc. install
RUN apt-get -yqq install git

# Install Python packages
RUN pip install Cython
RUN pip install gensim
RUN pip install h5py
RUN pip install ioutils
RUN pip install matplotlib
RUN pip install numpy
RUN pip install opencv-python
RUN pip install pandas
RUN pip install pillow
RUN pip install pydot
RUN pip install pydot-ng
RUN pip install pypandoc
RUN pip install pandoc
RUN pip install seaborn
RUN pip install sklearn
RUN pip install tensorflow-gpu

RUN pip install keras
RUN pip install keras_diagram
RUN pip install opencv-python
RUN pip install easydict
RUN pip install joblib

# Python 2.7 GPU support

# select a url to install based on your version requirements
# https://www.tensorflow.org/install/install_linux#the_url_of_the_tensorflow_python_package

#ENV tfBinaryURL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.3.0-cp35-cp35m-linux_x86_64.whl

ENV tfBinaryURL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.3.0-cp27-none-linux_x86_64.whl

RUN pip install --upgrade $tfBinaryURL

ARG USER_ID
RUN useradd --shell /bin/bash -o --create-home --user-group -u $USER_ID  app

ENV SQDT_ROOT=/squeezeDet

# Change to the user.
USER app


#
# ENV DATA_USER=depappas
#
# ENV DATA_GROUP=depappas
#
# RUN chmod -R 700 ${SQDT_ROOT}
#
# RUN groupadd -r ${DATA_GROUP} \
#   && useradd -r -g ${DATA_GROUP} ${DATA_USER}
# RUN mkdir -p ${SQDT_ROOT} \
#   && chown -R ${DATA_GROUP}:${DATA_USER} ${SQDT_ROOT}
# VOLUME ${SQDT_ROOT}
#
# # Change to the app user.
# USER ${DATA_USER}

Friday, January 5, 2018

Test a modified nvidia-docker image

$ nvidia-docker run -it nvidia-docker-mod   nvidia-smi

Fri Jan  5 08:37:59 2018      
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.98                 Driver Version: 384.98                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 106...  Off  | 00000000:01:00.0 Off |                  N/A |
|  0%   40C    P8     7W / 200W |    510MiB /  6072MiB |      1%      Default |
+-------------------------------+----------------------+----------------------+
                                                                              
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+


$ nvidia-docker run -it nvidia-docker-mod python -c "import tensorflow; print(tensorflow.__version__)"

1.4.1

$ nvidia-docker run -it nvidia-docker-mod python -c "import tensorflow as tf; sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))"

2018-01-05 08:41:17.320251: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018-01-05 08:41:17.438555: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-01-05 08:41:17.438790: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.7845
pciBusID: 0000:01:00.0
totalMemory: 5.93GiB freeMemory: 5.36GiB
2018-01-05 08:41:17.438806: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1
2018-01-05 08:41:17.462020: I tensorflow/core/common_runtime/direct_session.cc:299] Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1

Test a Python Tensorflow with GPU setup

# 12. test that tensorflow links with libcudnn

workon deep_learning_3.5
python
>>> import tensorflow

# 13. Check the Tensorflow version

python -c "import tensorflow; print(tensorflow.__version__)"

# 14. Test that Tensorflow is using the GPU

python -c "import tensorflow as tf; sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))"

##### Expected Output #########

I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:910] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: GeForce GTX 1060 6GB
major: 6 minor: 1 memoryClockRate (GHz) 1.7845
pciBusID 0000:01:00.0
Total memory: 5.93GiB
Free memory: 5.66GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0)
Device mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0
I tensorflow/core/common_runtime/direct_session.cc:257] Device mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0

# http://stackoverflow.com/questions/43335531/how-to-use-sse4-1-instructions-without-install-tensorflow-from-source

# 15. Setup Keras
For best performance, set `image_data_format="channels_last"` in your Keras config at ~/.keras/keras.json.

# Or use env vars

export KERAS_BACKEND=tensorflow
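Equivalently, both settings can be baked into the Keras config file; a minimal sketch that writes it (here into a scratch directory rather than the real $HOME, so nothing is clobbered):

```shell
# Write a Keras config selecting the TensorFlow backend and channels_last.
write_keras_config() {
  dir="$1/.keras"
  mkdir -p "$dir"
  cat > "$dir/keras.json" <<'EOF'
{
    "image_data_format": "channels_last",
    "backend": "tensorflow"
}
EOF
}

# Example: write into a scratch directory instead of the real $HOME.
scratch=$(mktemp -d)
write_keras_config "$scratch"
cat "$scratch/.keras/keras.json"
```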

# 16. test Keras
git clone http://github.com/rcmalli/keras-squeezenet.git
cd keras-squeezenet
python3.5 test.py

Using TensorFlow backend.
Downloading data from https://github.com/rcmalli/keras-squeezenet/releases/download/v1.0/squeezenet_weights_tf_dim_ordering_tf_kernels.h5
4530176/5059384 [=========================>....] - ETA: 0s2017-10-18 20:57:06.845299: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-10-18 20:57:06.845317: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-10-18 20:57:06.845321: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-10-18 20:57:06.845324: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-10-18 20:57:06.845326: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-10-18 20:57:06.952926: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:893] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2017-10-18 20:57:06.953146: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties: 
name: GeForce GTX 1060 6GB
major: 6 minor: 1 memoryClockRate (GHz) 1.7845
pciBusID 0000:01:00.0
Total memory: 5.93GiB
Free memory: 5.79GiB
2017-10-18 20:57:06.953159: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0 
2017-10-18 20:57:06.953162: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0:   Y 
2017-10-18 20:57:06.953167: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0)

(deep_learning_3.5) depappas@berlin:~/keras-squeezenet


# 17. Install TFLearn
# http://tflearn.org/installation/
# TFLearn Installation
# To install TFLearn, the easiest way is to run one of the following options.
# For the bleeding edge version:

pip install git+https://github.com/tflearn/tflearn.git

# For the latest stable version:

pip install tflearn

#You can also install from source by running this command (from source folder):


python setup.py install

Modifying the nvidia-docker Docker image

Let's take the nvidia-docker image and add a Python environment for deep learning.

$ docker build -t nvidia-docker-mod .

$ nvidia-docker run -it gcr.io/tensorflow/tensorflow:latest-gpu bash



FROM gcr.io/tensorflow/tensorflow:latest-gpu

# By default, Docker containers run as the root user. This is bad because:
#   1) You're more likely to mess up settings that you shouldn't
#   2) If an attacker gets access to your container, it's bad if they're root
# Here's how you can change a Docker container to run as a non-root user

## CREATE APP USER ##

# Create the home directory for the new app user.
RUN mkdir -p /home/app

# Create an app user so our program doesn't run as root.
RUN groupadd -r app &&\
    useradd -r -g app -d /home/app -s /sbin/nologin -c "Docker image user" app

# Set the home directory to our app user's home.
ENV HOME=/home/app
ENV APP_HOME=/home/app/my-project

## SETTING UP THE APP ##
RUN mkdir $APP_HOME
WORKDIR $APP_HOME

# ***
# Do any custom logic needed prior to adding your code here
# ***

# Copy in the application code.
ADD . $APP_HOME

# Chown all the files to the app user.
RUN chown -R app:app $APP_HOME

ENV WORKON_HOME="$HOME/python_virtual_env"

RUN echo "/usr/share/virtualenvwrapper/virtualenvwrapper.sh" >> /etc/bash.bashrc

# install python
RUN apt-get update
RUN apt-get install -yqq python 
RUN apt-get install -yqq python-pip
 
# Install app dependencies
RUN pip install --upgrade pip
RUN easy_install --upgrade pip


# 9. Misc. install
RUN apt-get -yqq install apt-utils
RUN apt-get -yqq install pandoc
RUN apt-get -yqq install graphviz
RUN apt-get -yqq install pandoc

# 10. Install Python packages
RUN pip3 install keras 
RUN pip3 install h5py
RUN pip3 install numpy
RUN pip3 install matplotlib 
RUN pip3 install gensim 
RUN pip3 install ioutils 
RUN pip3 install Cython
RUN pip3 install opencv-python
RUN pip3 install keras
RUN pip3 install sklearn
RUN pip3 install pypandoc
RUN pip3 install pandoc
RUN pip3 install keras_diagram
RUN pip3 install tensorflow-gpu
RUN pip3 install h5py
RUN pip3 install seaborn
RUN pip3 install flake8
RUN pip3 install pandas
RUN pip3 install pydot
RUN pip3 install pydot-ng
 # Python 3.5 GPU support

ENV tfBinaryURL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.3.0-cp35-cp35m-linux_x86_64.whl

RUN pip3 install --upgrade $tfBinaryURL

# Change to the app user.
USER app

Thursday, January 4, 2018

imageio.core.fetching.NeedDownloadError




import matplotlib.pyplot as plt
import cv2
import os, glob
import numpy as np
from moviepy.editor import VideoFileClip

%matplotlib inline
%config InlineBackend.figure_format = 'retina'



---------------------------------------------------------------------------
NeedDownloadError                         Traceback (most recent call last)
~/anaconda3/envs/car-finding-lane-lines/lib/python3.6/site-packages/imageio/plugins/ffmpeg.py in get_exe()
     81             exe = get_remote_file('ffmpeg/' + FNAME_PER_PLATFORM[plat],
---> 82                                   auto=False)
     83             os.chmod(exe, os.stat(exe).st_mode | stat.S_IEXEC)  # executable

~/anaconda3/envs/car-finding-lane-lines/lib/python3.6/site-packages/imageio/core/fetching.py in get_remote_file(fname, directory, force_download, auto)
    101     if not auto:
--> 102         raise NeedDownloadError()
    103 


https://github.com/david-gpu/srez/issues/18


 $ python
Python 3.6.3 |Anaconda, Inc.| (default, Nov 20 2017, 20:41:42)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import imageio
>>> imageio.plugins.ffmpeg.download()
Imageio: 'ffmpeg.linux64' was not found on your computer; downloading it now.
Try 1. Download from https://github.com/imageio/imageio-binaries/raw/master/ffmpeg/ffmpeg.linux64 (27.2 MB)
Downloading: 28549024/28549024 bytes (100.0%)
  Done
File saved as /home/joesmith/.imageio/ffmpeg/ffmpeg.linux64.