
Installation and setting up your Environment


Setting up your environment and installing TensorFlow, a.k.a. the need for military-level precision in resolving dependency conflicts


Install TensorFlow and Object Detection API on your x86 Linux PC

pip install --ignore-installed --upgrade tensorflow==2.7.0
python -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"

Setup Folder Structure
mkdir TensorFlow
cd TensorFlow

Clone and Install the Object Detection API
Source
git clone https://github.com/tensorflow/models
cd models/research

Compile the Protobuf files (requires the protoc compiler to be installed)
protoc object_detection/protos/*.proto --python_out=.

git clone https://github.com/cocodataset/cocoapi.git
cd cocoapi/PythonAPI
conda install cython
make
cp -r pycocotools <PATH_TO_TF>/TensorFlow/models/research/
cd <PATH_TO_TF>/TensorFlow/models/research/
cp object_detection/packages/tf2/setup.py .
python -m pip install --use-feature=2020-resolver .
python object_detection/builders/model_builder_tf2_test.py

Alternatively:
Install Anaconda
conda create -n tensorflow pip python=3.9
conda activate tensorflow
The spec file used to create an identical environment can be found here.
To use the spec file to create an identical environment on the same machine or another machine:
conda create --name myenv --file spec-file.txt

To use the spec file to install its listed packages into an existing environment:
conda install --name myenv --file spec-file.txt

Training the Model and Running Inference
Convert XML annotations into CSV
python xml_to_csv.py xml images/ annotations/train_labels.csv
python xml_to_csv.py xml test_images/ annotations/test_labels.csv
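The conversion step above boils down to reading each Pascal VOC XML annotation (the format labelImg writes) and flattening every bounding box into a CSV row. The sketch below is an illustration of that idea, not the actual interface of xml_to_csv.py; the function names and CSV columns are assumptions.

```python
# Minimal sketch of an XML-to-CSV conversion for Pascal VOC annotations.
# One CSV row is emitted per bounding box, not per image.
import csv
import xml.etree.ElementTree as ET

FIELDS = ["filename", "width", "height", "class", "xmin", "ymin", "xmax", "ymax"]

def voc_xml_to_rows(xml_text):
    """Parse one Pascal VOC annotation string into a list of row dicts."""
    root = ET.fromstring(xml_text)
    filename = root.findtext("filename")
    width = int(root.findtext("size/width"))
    height = int(root.findtext("size/height"))
    rows = []
    for obj in root.iter("object"):
        box = obj.find("bndbox")
        rows.append({
            "filename": filename,
            "width": width,
            "height": height,
            "class": obj.findtext("name"),
            "xmin": int(box.findtext("xmin")),
            "ymin": int(box.findtext("ymin")),
            "xmax": int(box.findtext("xmax")),
            "ymax": int(box.findtext("ymax")),
        })
    return rows

def write_csv(rows, path):
    """Write the collected rows to a CSV file with a header line."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)
```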

Generate .record files
python generate_tfrecord.py -x images/ -l annotations/label_map.pbtxt -o annotations/train.record
python generate_tfrecord.py -x test_images/ -l annotations/label_map.pbtxt -o annotations/test.record
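The label map passed via `-l annotations/label_map.pbtxt` is a plain-text protobuf mapping class ids (starting from 1) to names. For a single-class detector it might look like the fragment below; the class name here is an assumption based on the model names used later:

```
item {
  id: 1
  name: 'cone'
}
```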

Download a pre-trained model, extract it, and set its path in pipeline.config
Make an empty folder where the trained model checkpoints will be stored
Provide this folder's path and the pipeline.config path in the command below
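For orientation, these are the pipeline.config fields that typically need editing before training; the paths and batch size below are placeholders, not values from this project:

```
num_classes: 1
batch_size: 4
fine_tune_checkpoint: "pre-trained-models/efficientdet_d0_coco17_tpu-32/checkpoint/ckpt-0"
fine_tune_checkpoint_type: "detection"
train_input_reader {
  label_map_path: "annotations/label_map.pbtxt"
  tf_record_input_reader {
    input_path: "annotations/train.record"
  }
}
eval_input_reader {
  label_map_path: "annotations/label_map.pbtxt"
  tf_record_input_reader {
    input_path: "annotations/test.record"
  }
}
```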

Train your model
python model_main_tf2.py --model_dir=models/efficientdet_d0_coco17_tpu-32 --pipeline_config_path=/home/pranay.mathur/TensorFlow/workspace/eff_det/models/efficientdet_d0_coco17_tpu-32/pipeline.config

Export your Model
python exporter_main_v2.py --input_type image_tensor --pipeline_config_path models/efficientdet_d0_coco17_tpu-32/pipeline.config --trained_checkpoint_dir models/efficientdet_d0_coco17_tpu-32/ --output_directory final_model/

Perform Inference
python video_inference.py --model final_model --labels annotations/label_map.pbtxt
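At inference time the `--labels` file has to be turned back into an id-to-name mapping so detections can be drawn with class names. A rough stdlib-only sketch of that parsing step is shown below; it is an illustration, and the Object Detection API's own label_map_util should be preferred when available:

```python
# Naive parser that extracts {id: name} pairs from a simple
# label_map.pbtxt string. Assumes flat, non-nested item { ... } blocks.
import re

def parse_label_map(pbtxt_text):
    """Return a dict mapping integer class ids to class names."""
    items = {}
    for block in re.findall(r"item\s*\{(.*?)\}", pbtxt_text, re.S):
        id_match = re.search(r"id\s*:\s*(\d+)", block)
        name_match = re.search(r"name\s*:\s*['\"]([^'\"]+)['\"]", block)
        if id_match and name_match:
            items[int(id_match.group(1))] = name_match.group(1)
    return items
```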




Setting up your Raspberry Pi 4 - Raspberry Pi OS Buster aarch64

Source

Setup and Installation

Create a virtual python3 environment
python -m venv tflite
source tflite/bin/activate
pip install gdown
sudo apt-get install libportaudio2

Install TensorFlow Lite for Python
python3 -m pip install tflite-runtime

Clone the TensorFlow examples repository and install tflite
git clone https://github.com/tensorflow/examples --depth 1
cd examples/lite/examples/object_detection/raspberry_pi

The script installs the required dependencies and downloads the TFLite models.
sh setup.sh

Perform inference on the Raspberry Pi 4
Download the test video
gdown 1O8sTOCbTI0bmJTaZhbOr40dgPJSWVNYz

Download the quantized model
gdown 1-3ZxeGXyJhshmpE7Vrc8jxo_zkE0GKCB
To test on our output video, replace line 44 of detect.py with:
cap = cv2.VideoCapture("output.avi")

Perform inference
python3 detect.py --model cone_detection.tflite

Perform inference with EdgeTPU
**The following 4 commands are not necessary if you only want to run a pre-trained model**
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
sudo apt-get update
sudo apt-get install edgetpu-compiler

Install PyCoral
python3 -m pip install --extra-index-url https://google-coral.github.io/py-repo/ pycoral~=2.0
git clone https://github.com/google-coral/pycoral
cd pycoral/examples/

Run Example
python3 detect_image.py \
--model cone_detection_edgetpu.tflite \
--labels cone_labels.txt \
--input image.jpg \
--output result.jpg
Perform Inference on the Video or Live feed
python3 detect.py --model cone_detection_edgetpu.tflite --enableEdgeTPU


