Tools and tricks to make AI development faster and simpler
Set up the profiling libraries
sudo apt install graphviz
pip install gprof2dot
Analyse the Python program main.py and save the profiling result to profile.pstats
python -m cProfile -o profile.pstats main.py
Plot the profiling result as an SVG call graph
gprof2dot -f pstats profile.pstats | dot -Tsvg -o main_profiled.svg
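The saved .pstats file can also be inspected directly from Python with the stdlib pstats module. A minimal self-contained sketch, profiling a dummy function in place of main.py:

```python
import cProfile
import pstats

def work():
    # Dummy workload standing in for main.py
    return sum(i * i for i in range(100_000))

# Profile and save stats, equivalent to: python -m cProfile -o profile.pstats main.py
profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()
profiler.dump_stats("profile.pstats")

# Load the stats and print the 5 most expensive calls by cumulative time
stats = pstats.Stats("profile.pstats")
stats.sort_stats("cumulative").print_stats(5)
```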
Check your Ubuntu version to know which prebuilt OpenCV version apt will install
Ubuntu 23.04 -> OpenCV 4.6.0
Ubuntu 22.04 -> OpenCV 4.5.4
Ubuntu 21.04 -> OpenCV 4.5.1
Ubuntu 20.04 -> OpenCV 4.2.0
Ubuntu 18.04 -> OpenCV 3.2.0
Ubuntu 16.04 -> OpenCV 2.4.9.1
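On Ubuntu the release number can be read programmatically from /etc/os-release; a small parser sketch (the sample string is hard-coded here so the snippet runs anywhere):

```python
def ubuntu_version(os_release_text):
    """Extract VERSION_ID from /etc/os-release-style text."""
    for line in os_release_text.splitlines():
        if line.startswith("VERSION_ID="):
            return line.split("=", 1)[1].strip('"')
    return None

# Sample content; on a real system read it with open("/etc/os-release").read()
sample = 'NAME="Ubuntu"\nVERSION_ID="22.04"\n'
print(ubuntu_version(sample))  # 22.04
```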
Install C++ build tools (just in case, on a fresh Ubuntu)
sudo apt update
sudo apt install build-essential
Install pkg-config to compile C++ code without declaring tons of flags manually
sudo apt install pkg-config
Install the prebuilt OpenCV C++ library (opencv-contrib included)
sudo apt install libopencv-dev
Create a simple C++ file mainprog.cpp that calls OpenCV
#include <opencv2/opencv.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/tracking.hpp> // only in contrib

using namespace cv;
using namespace std;

int main() {
    cout << "The current OpenCV version is " << CV_VERSION << "\n";
    return 0;
}
Build
- For OpenCV version 4
g++ mainprog.cpp -o mainprog `pkg-config --cflags --libs opencv4`
- For OpenCV version 2, 3
g++ mainprog.cpp -o mainprog `pkg-config --cflags --libs opencv`
Run
./mainprog
Crawl training images with icrawler (pip install icrawler). More on https://icrawler.readthedocs.io/en/latest/builtin.html
from icrawler.builtin import BaiduImageCrawler, BingImageCrawler, GoogleImageCrawler

google_crawler = GoogleImageCrawler(
    feeder_threads=1,
    parser_threads=1,
    downloader_threads=4,
    storage={'root_dir': 'your_image_dir'})
filters = dict(
    size='large',
    color='orange',
    license='commercial,modify',
    date=((2017, 1, 1), (2017, 11, 30)))
google_crawler.crawl(keyword='cat', filters=filters, offset=0, max_num=1000,
                     min_size=(200, 200), max_size=None, file_idx_offset=0)

bing_crawler = BingImageCrawler(downloader_threads=4,
                                storage={'root_dir': 'your_image_dir'})
bing_crawler.crawl(keyword='cat', filters=None, offset=0, max_num=1000)

baidu_crawler = BaiduImageCrawler(storage={'root_dir': 'your_image_dir'})
baidu_crawler.crawl(keyword='cat', offset=0, max_num=1000,
                    min_size=(200, 200), max_size=None)
- Install YOLO
pip install ultralytics
- Detect objects and save cropped results into the yolo/ folder under the execution path
yolo detect predict model=yolov8x.pt save_crop=True project='yolo' source='input_folder_or_file'
def get_color(number):
    """Map an integer ID to a pseudo-random BGR color tuple."""
    blue = int(number * 50 % 256)
    green = int(number * 30 % 256)
    red = int(number * 103 % 256)
    return blue, green, red
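A quick sanity check of get_color: each object ID maps to a stable, distinct tuple with every channel in 0-255 (the function is repeated here so the snippet runs standalone):

```python
def get_color(number):
    """Map an integer ID to a pseudo-random BGR color tuple."""
    blue = int(number * 50 % 256)
    green = int(number * 30 % 256)
    red = int(number * 103 % 256)
    return blue, green, red

# Each tracked object ID gets a stable, distinct color
for obj_id in range(4):
    print(obj_id, get_color(obj_id))
# 0 -> (0, 0, 0), 1 -> (50, 30, 103), 2 -> (100, 60, 206), 3 -> (150, 90, 53)
```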
- Multi-turn open-ended questions
- Radar plot of the results
- Benchmark categories: Writing, Roleplay, Reasoning, Math, Coding, Extraction, STEM, Humanities
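For the radar plot, each category gets one evenly spaced spoke around the circle; the angles can be computed with stdlib math (a plotting library such as matplotlib's polar axes would then draw the chart):

```python
import math

categories = ["Writing", "Roleplay", "Reasoning", "Math",
              "Coding", "Extraction", "STEM", "Humanities"]

# One spoke per category, evenly spaced over 360 degrees
angles = [2 * math.pi * i / len(categories) for i in range(len(categories))]
for name, angle in zip(categories, angles):
    print(f"{name:>10}: {math.degrees(angle):5.1f} deg")
```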
pip install onnx-tool
import onnx_tool
modelpath = 'resnet50.onnx'
onnx_tool.model_profile(modelpath, None, None)
Note: the final total reported as MACs is actually the FLOPs count
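Context for the MACs/FLOPs note: by the usual convention one MAC (multiply-accumulate) is two floating-point operations, so FLOPs ≈ 2 × MACs, which is why it matters which quantity a profiler actually reports. A hand count for ResNet-50's first convolution, using the standard architecture sizes:

```python
def conv2d_macs(out_h, out_w, out_c, k_h, k_w, in_c):
    """MACs of a plain 2D convolution: one MAC per kernel weight per output element."""
    return out_h * out_w * out_c * k_h * k_w * in_c

# ResNet-50 stem conv: 7x7 kernel, 3 -> 64 channels, 112x112 output
macs = conv2d_macs(112, 112, 64, 7, 7, 3)
flops = 2 * macs  # one multiply + one add per MAC
print(f"MACs:  {macs:,}")   # 118,013,952
print(f"FLOPs: {flops:,}")  # 236,027,904
```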